00:00:00.001 Started by upstream project "autotest-per-patch" build number 132301
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.045 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.046 The recommended git tool is: git
00:00:00.046 using credential 00000000-0000-0000-0000-000000000002
00:00:00.048 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.081 Fetching changes from the remote Git repository
00:00:00.083 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.148 Using shallow fetch with depth 1
00:00:00.148 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.148 > git --version # timeout=10
00:00:00.221 > git --version # 'git version 2.39.2'
00:00:00.221 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.264 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.264 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.365 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.376 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.387 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:05.387 > git config core.sparsecheckout # timeout=10
00:00:05.398 > git read-tree -mu HEAD # timeout=10
00:00:05.413 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:05.428 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:05.428 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:05.515 [Pipeline] Start of Pipeline
00:00:05.528 [Pipeline] library
00:00:05.530 Loading library shm_lib@master
00:00:05.530 Library shm_lib@master is cached. Copying from home.
00:00:05.546 [Pipeline] node
00:00:05.556 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:05.558 [Pipeline] {
00:00:05.568 [Pipeline] catchError
00:00:05.570 [Pipeline] {
00:00:05.579 [Pipeline] wrap
00:00:05.586 [Pipeline] {
00:00:05.591 [Pipeline] stage
00:00:05.592 [Pipeline] { (Prologue)
00:00:05.609 [Pipeline] echo
00:00:05.610 Node: VM-host-WFP1
00:00:05.617 [Pipeline] cleanWs
00:00:05.628 [WS-CLEANUP] Deleting project workspace...
00:00:05.628 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.635 [WS-CLEANUP] done
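The prologue above pins the jbp helper repository with a depth-1 fetch and a detached checkout rather than a full clone, which keeps the workspace small and the revision exact. A minimal sketch of the same sequence, using only the git commands visible in the log (the target directory name is illustrative):

# Reproduce the pinned, shallow checkout from the prologue.
git init jbp && cd jbp
git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
# --depth=1 fetches only the tip of refs/heads/master.
git fetch --tags --force --progress --depth=1 -- \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
# Check out the exact revision the build resolved (detached HEAD).
git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf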
00:00:05.857 [Pipeline] setCustomBuildProperty
00:00:05.928 [Pipeline] httpRequest
00:00:07.536 [Pipeline] echo
00:00:07.538 Sorcerer 10.211.164.101 is alive
00:00:07.544 [Pipeline] retry
00:00:07.545 [Pipeline] {
00:00:07.555 [Pipeline] httpRequest
00:00:07.559 HttpMethod: GET
00:00:07.559 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.560 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.586 Response Code: HTTP/1.1 200 OK
00:00:07.586 Success: Status code 200 is in the accepted range: 200,404
00:00:07.587 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:16.540 [Pipeline] }
00:00:16.557 [Pipeline] // retry
00:00:16.564 [Pipeline] sh
00:00:16.846 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:16.862 [Pipeline] httpRequest
00:00:18.675 [Pipeline] echo
00:00:18.677 Sorcerer 10.211.164.101 is alive
00:00:18.686 [Pipeline] retry
00:00:18.688 [Pipeline] {
00:00:18.700 [Pipeline] httpRequest
00:00:18.704 HttpMethod: GET
00:00:18.704 URL: http://10.211.164.101/packages/spdk_57db986b9dee20a7155b22b0a6cc3d469a05d021.tar.gz
00:00:18.705 Sending request to url: http://10.211.164.101/packages/spdk_57db986b9dee20a7155b22b0a6cc3d469a05d021.tar.gz
00:00:18.728 Response Code: HTTP/1.1 200 OK
00:00:18.729 Success: Status code 200 is in the accepted range: 200,404
00:00:18.729 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_57db986b9dee20a7155b22b0a6cc3d469a05d021.tar.gz
00:01:35.331 [Pipeline] }
00:01:35.349 [Pipeline] // retry
00:01:35.357 [Pipeline] sh
00:01:35.641 + tar --no-same-owner -xf spdk_57db986b9dee20a7155b22b0a6cc3d469a05d021.tar.gz
00:01:38.225 [Pipeline] sh
00:01:38.508 + git -C spdk log --oneline -n5
00:01:38.508 57db986b9 bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:38.508 1c3ed84fd bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:38.508 514198259 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:01:38.508 59da1a1d7 nvmf: Expose DIF type of namespace to host again
00:01:38.508 9a34ab7f7 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:01:38.527 [Pipeline] writeFile
00:01:38.543 [Pipeline] sh
00:01:38.827 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:38.839 [Pipeline] sh
00:01:39.121 + cat autorun-spdk.conf
00:01:39.121 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:39.121 SPDK_TEST_NVME=1
00:01:39.121 SPDK_TEST_FTL=1
00:01:39.121 SPDK_TEST_ISAL=1
00:01:39.121 SPDK_RUN_ASAN=1
00:01:39.121 SPDK_RUN_UBSAN=1
00:01:39.121 SPDK_TEST_XNVME=1
00:01:39.121 SPDK_TEST_NVME_FDP=1
00:01:39.121 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:39.127 RUN_NIGHTLY=0
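autorun-spdk.conf, shown in full by the cat above, is a plain shell fragment: every line is a KEY=value assignment. Later stages source it and branch on the flags, as the prepare_nvme.sh trace below shows with its ++ lines. A minimal sketch of that consumption pattern (the echo bodies are illustrative):

#!/bin/bash
# Source the generated conf, then gate optional work on the flags it sets.
source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
if (( SPDK_TEST_NVME == 1 )); then
    echo "NVMe functional tests requested"
fi
if (( SPDK_TEST_NVME_FDP == 1 )); then
    echo "FDP tests requested; an FDP-capable controller will be emulated"
fi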
00:01:39.129 [Pipeline] }
00:01:39.142 [Pipeline] // stage
00:01:39.159 [Pipeline] stage
00:01:39.161 [Pipeline] { (Run VM)
00:01:39.174 [Pipeline] sh
00:01:39.456 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:39.456 + echo 'Start stage prepare_nvme.sh'
00:01:39.456 Start stage prepare_nvme.sh
00:01:39.456 + [[ -n 0 ]]
00:01:39.456 + disk_prefix=ex0
00:01:39.456 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:39.456 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:39.456 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:39.456 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:39.456 ++ SPDK_TEST_NVME=1
00:01:39.456 ++ SPDK_TEST_FTL=1
00:01:39.456 ++ SPDK_TEST_ISAL=1
00:01:39.456 ++ SPDK_RUN_ASAN=1
00:01:39.456 ++ SPDK_RUN_UBSAN=1
00:01:39.456 ++ SPDK_TEST_XNVME=1
00:01:39.456 ++ SPDK_TEST_NVME_FDP=1
00:01:39.456 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:39.456 ++ RUN_NIGHTLY=0
00:01:39.456 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:39.456 + nvme_files=()
00:01:39.456 + declare -A nvme_files
00:01:39.456 + backend_dir=/var/lib/libvirt/images/backends
00:01:39.456 + nvme_files['nvme.img']=5G
00:01:39.456 + nvme_files['nvme-cmb.img']=5G
00:01:39.456 + nvme_files['nvme-multi0.img']=4G
00:01:39.456 + nvme_files['nvme-multi1.img']=4G
00:01:39.456 + nvme_files['nvme-multi2.img']=4G
00:01:39.456 + nvme_files['nvme-openstack.img']=8G
00:01:39.456 + nvme_files['nvme-zns.img']=5G
00:01:39.456 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:39.456 + (( SPDK_TEST_FTL == 1 ))
00:01:39.456 + nvme_files["nvme-ftl.img"]=6G
00:01:39.456 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:39.456 + nvme_files["nvme-fdp.img"]=1G
00:01:39.456 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:39.456 + for nvme in "${!nvme_files[@]}"
00:01:39.456 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:01:39.456 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:39.456 + for nvme in "${!nvme_files[@]}"
00:01:39.456 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G
00:01:39.715 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:39.715 + for nvme in "${!nvme_files[@]}"
00:01:39.715 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:01:39.715 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:39.715 + for nvme in "${!nvme_files[@]}"
00:01:39.715 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:01:39.715 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:39.715 + for nvme in "${!nvme_files[@]}"
00:01:39.715 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:01:39.715 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:39.715 + for nvme in "${!nvme_files[@]}"
00:01:39.715 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:01:39.974 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:39.974 + for nvme in "${!nvme_files[@]}"
00:01:39.974 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:01:40.232 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:40.232 + for nvme in "${!nvme_files[@]}"
00:01:40.232 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G
00:01:40.232 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:40.232 + for nvme in "${!nvme_files[@]}"
00:01:40.232 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:01:40.491 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:40.491 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:01:40.491 + echo 'End stage prepare_nvme.sh'
00:01:40.491 End stage prepare_nvme.sh
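prepare_nvme.sh drives image creation from an associative array that maps image name to size; the FTL and FDP entries are appended only because SPDK_TEST_FTL=1 and SPDK_TEST_NVME_FDP=1 are set in the sourced conf. create_nvme_img.sh is SPDK's helper; the qemu-img call below is an assumed stand-in for what it produces (raw, falloc-preallocated files, matching the Formatting lines above):

#!/bin/bash
# Sketch of the loop above: one raw backing file per emulated NVMe drive.
declare -A nvme_files=( [nvme.img]=5G [nvme-multi0.img]=4G [nvme-multi1.img]=4G )
(( SPDK_TEST_FTL == 1 ))      && nvme_files[nvme-ftl.img]=6G
(( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G
backend_dir=/var/lib/libvirt/images/backends
for nvme in "${!nvme_files[@]}"; do
    # Assumed equivalent of create_nvme_img.sh for a raw, falloc-preallocated image.
    qemu-img create -f raw -o preallocation=falloc \
        "$backend_dir/ex0-$nvme" "${nvme_files[$nvme]}"
done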
00:01:40.503 [Pipeline] sh
00:01:40.784 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:40.784 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:40.784 
00:01:40.784 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:40.784 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:40.784 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:40.784 HELP=0
00:01:40.784 DRY_RUN=0
00:01:40.784 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,
00:01:40.784 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:40.784 NVME_AUTO_CREATE=0
00:01:40.784 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,,
00:01:40.784 NVME_CMB=,,,,
00:01:40.784 NVME_PMR=,,,,
00:01:40.784 NVME_ZNS=,,,,
00:01:40.784 NVME_MS=true,,,,
00:01:40.784 NVME_FDP=,,,on,
00:01:40.784 SPDK_VAGRANT_DISTRO=fedora39
00:01:40.784 SPDK_VAGRANT_VMCPU=10
00:01:40.784 SPDK_VAGRANT_VMRAM=12288
00:01:40.784 SPDK_VAGRANT_PROVIDER=libvirt
00:01:40.784 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:40.784 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:40.784 SPDK_OPENSTACK_NETWORK=0
00:01:40.784 VAGRANT_PACKAGE_BOX=0
00:01:40.784 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:40.784 FORCE_DISTRO=true
00:01:40.784 VAGRANT_BOX_VERSION=
00:01:40.784 EXTRA_VAGRANTFILES=
00:01:40.784 NIC_MODEL=e1000
00:01:40.784 
00:01:40.784 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:40.784 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:43.318 Bringing machine 'default' up with 'libvirt' provider...
00:01:44.254 ==> default: Creating image (snapshot of base box volume).
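Every NVME_* variable in the Setup dump above is a comma-separated list with one field per emulated controller, in drive order: NVME_MS=true,,,, requests metadata only on drive 0 (the FTL image), NVME_DISKS_NAMESPACES gives drive 2 its two extra namespace images, and NVME_FDP=,,,on, enables FDP only on drive 3. A short sketch of how such per-drive fields split apart (values copied from the dump; the loop body is illustrative):

#!/bin/bash
# Decode the per-drive option strings from the Setup line above.
NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
NVME_MS=true,,,,
NVME_FDP=,,,on,
IFS=',' read -ra types <<< "$NVME_DISKS_TYPE"
IFS=',' read -ra ms    <<< "$NVME_MS"
IFS=',' read -ra fdp   <<< "$NVME_FDP"
for i in "${!types[@]}"; do
    [[ -n ${types[$i]} ]] || continue    # skip the empty trailing field
    echo "drive $i: type=${types[$i]} ms=${ms[$i]:-off} fdp=${fdp[$i]:-off}"
done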
00:01:44.526 ==> default: Creating domain with the following settings...
00:01:44.526 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731668720_bd3ffb31701e7c2a7b6f
00:01:44.526 ==> default: -- Domain type: kvm
00:01:44.526 ==> default: -- Cpus: 10
00:01:44.526 ==> default: -- Feature: acpi
00:01:44.526 ==> default: -- Feature: apic
00:01:44.526 ==> default: -- Feature: pae
00:01:44.526 ==> default: -- Memory: 12288M
00:01:44.526 ==> default: -- Memory Backing: hugepages:
00:01:44.526 ==> default: -- Management MAC:
00:01:44.526 ==> default: -- Loader:
00:01:44.526 ==> default: -- Nvram:
00:01:44.526 ==> default: -- Base box: spdk/fedora39
00:01:44.526 ==> default: -- Storage pool: default
00:01:44.526 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731668720_bd3ffb31701e7c2a7b6f.img (20G)
00:01:44.526 ==> default: -- Volume Cache: default
00:01:44.526 ==> default: -- Kernel:
00:01:44.527 ==> default: -- Initrd:
00:01:44.527 ==> default: -- Graphics Type: vnc
00:01:44.527 ==> default: -- Graphics Port: -1
00:01:44.527 ==> default: -- Graphics IP: 127.0.0.1
00:01:44.527 ==> default: -- Graphics Password: Not defined
00:01:44.527 ==> default: -- Video Type: cirrus
00:01:44.527 ==> default: -- Video VRAM: 9216
00:01:44.527 ==> default: -- Sound Type:
00:01:44.527 ==> default: -- Keymap: en-us
00:01:44.527 ==> default: -- TPM Path:
00:01:44.527 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:44.527 ==> default: -- Command line args:
00:01:44.527 ==> default: -> value=-device,
00:01:44.527 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:44.527 ==> default: -> value=-drive,
00:01:44.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:44.527 ==> default: -> value=-device,
00:01:44.527 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:44.527 ==> default: -> value=-device,
00:01:44.527 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:44.527 ==> default: -> value=-drive,
00:01:44.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0,
00:01:44.527 ==> default: -> value=-device,
00:01:44.527 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:44.527 ==> default: -> value=-device,
00:01:44.527 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:44.527 ==> default: -> value=-drive,
00:01:44.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:44.527 ==> default: -> value=-device,
00:01:44.527 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:44.527 ==> default: -> value=-drive,
00:01:44.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:44.527 ==> default: -> value=-device,
00:01:44.527 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:44.527 ==> default: -> value=-drive,
00:01:44.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:44.527 ==> default: -> value=-device,
00:01:44.527 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:44.527 ==> default: -> value=-device,
00:01:44.527 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:44.527 ==> default: -> value=-device,
00:01:44.527 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:44.527 ==> default: -> value=-drive,
00:01:44.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:44.527 ==> default: -> value=-device,
00:01:44.527 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
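The fourth controller above differs from the first three: Flexible Data Placement is a subsystem-level feature, so the nvme device is attached to an explicit nvme-subsys with fdp=on (reclaim unit size fdp.runs=96M, two reclaim groups, eight reclaim unit handles), while drive 0 carries ms=64 for metadata and drive 2 exposes three namespaces. Extracted from the args above, the FDP part alone looks like this (all flags as passed to QEMU; other options omitted):

# Minimal FDP-enabled NVMe topology, taken from the command line args above.
qemu-system-x86_64 \
    -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
    -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0 \
    -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096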
00:01:44.789 ==> default: Creating shared folders metadata...
00:01:44.789 ==> default: Starting domain.
00:01:46.694 ==> default: Waiting for domain to get an IP address...
00:02:01.587 ==> default: Waiting for SSH to become available...
00:02:03.491 ==> default: Configuring and enabling network interfaces...
00:02:08.762 default: SSH address: 192.168.121.33:22
00:02:08.762 default: SSH username: vagrant
00:02:08.762 default: SSH auth method: private key
00:02:12.050 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:20.167 ==> default: Mounting SSHFS shared folder...
00:02:22.737 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:22.737 ==> default: Checking Mount..
00:02:24.115 ==> default: Folder Successfully Mounted!
00:02:24.115 ==> default: Running provisioner: file...
00:02:25.053 default: ~/.gitconfig => .gitconfig
00:02:25.992 
00:02:25.992 SUCCESS!
00:02:25.992 
00:02:25.992 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:25.992 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:25.992 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:25.992 
00:02:26.000 [Pipeline] }
00:02:26.013 [Pipeline] // stage
00:02:26.021 [Pipeline] dir
00:02:26.022 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:26.024 [Pipeline] {
00:02:26.035 [Pipeline] catchError
00:02:26.037 [Pipeline] {
00:02:26.046 [Pipeline] sh
00:02:26.323 + vagrant ssh-config --host vagrant
00:02:26.323 + sed -ne /^Host/,$p
00:02:26.323 + tee ssh_conf
00:02:29.614 Host vagrant
00:02:29.614 HostName 192.168.121.33
00:02:29.614 User vagrant
00:02:29.614 Port 22
00:02:29.614 UserKnownHostsFile /dev/null
00:02:29.614 StrictHostKeyChecking no
00:02:29.614 PasswordAuthentication no
00:02:29.614 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:29.614 IdentitiesOnly yes
00:02:29.614 LogLevel FATAL
00:02:29.614 ForwardAgent yes
00:02:29.614 ForwardX11 yes
00:02:29.614 
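vagrant ssh-config emits an OpenSSH client configuration; sed -ne '/^Host/,$p' trims vagrant's leading status output, and tee ssh_conf persists the result. Everything after this point reaches the VM with plain ssh/scp against that file instead of going through vagrant again:

# Capture the VM's SSH settings once...
vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
# ...then reuse them from any later stage without invoking vagrant.
ssh -F ssh_conf vagrant@vagrant 'uname -a'
scp -F ssh_conf some-script.sh vagrant@vagrant:./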
00:02:29.627 [Pipeline] withEnv
00:02:29.629 [Pipeline] {
00:02:29.640 [Pipeline] sh
00:02:29.933 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:29.933 source /etc/os-release
00:02:29.933 [[ -e /image.version ]] && img=$(< /image.version)
00:02:29.933 # Minimal, systemd-like check.
00:02:29.933 if [[ -e /.dockerenv ]]; then
00:02:29.933 # Clear garbage from the node's name:
00:02:29.933 # agt-er_autotest_547-896 -> autotest_547-896
00:02:29.933 # $HOSTNAME is the actual container id
00:02:29.933 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:29.933 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:29.933 # We can assume this is a mount from a host where container is running,
00:02:29.933 # so fetch its hostname to easily identify the target swarm worker.
00:02:29.933 container="$(< /etc/hostname) ($agent)"
00:02:29.933 else
00:02:29.933 # Fallback
00:02:29.933 container=$agent
00:02:29.933 fi
00:02:29.933 fi
00:02:29.933 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:29.933 
00:02:30.321 [Pipeline] }
00:02:30.331 [Pipeline] // withEnv
00:02:30.336 [Pipeline] setCustomBuildProperty
00:02:30.345 [Pipeline] stage
00:02:30.347 [Pipeline] { (Tests)
00:02:30.360 [Pipeline] sh
00:02:30.639 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:30.910 [Pipeline] sh
00:02:31.192 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:31.464 [Pipeline] timeout
00:02:31.464 Timeout set to expire in 50 min
00:02:31.466 [Pipeline] {
00:02:31.478 [Pipeline] sh
00:02:31.757 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:32.324 HEAD is now at 57db986b9 bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:32.336 [Pipeline] sh
00:02:32.618 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:32.890 [Pipeline] sh
00:02:33.171 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:33.446 [Pipeline] sh
00:02:33.730 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:02:33.989 ++ readlink -f spdk_repo
00:02:33.989 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:33.989 + [[ -n /home/vagrant/spdk_repo ]]
00:02:33.989 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:33.989 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:33.990 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:33.990 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:33.990 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:33.990 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:33.990 + cd /home/vagrant/spdk_repo
00:02:33.990 + source /etc/os-release
00:02:33.990 ++ NAME='Fedora Linux'
00:02:33.990 ++ VERSION='39 (Cloud Edition)'
00:02:33.990 ++ ID=fedora
00:02:33.990 ++ VERSION_ID=39
00:02:33.990 ++ VERSION_CODENAME=
00:02:33.990 ++ PLATFORM_ID=platform:f39
00:02:33.990 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:33.990 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:33.990 ++ LOGO=fedora-logo-icon
00:02:33.990 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:33.990 ++ HOME_URL=https://fedoraproject.org/
00:02:33.990 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:33.990 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:33.990 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:33.990 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:33.990 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:33.990 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:33.990 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:33.990 ++ SUPPORT_END=2024-11-12
00:02:33.990 ++ VARIANT='Cloud Edition'
00:02:33.990 ++ VARIANT_ID=cloud
00:02:33.990 + uname -a
00:02:33.990 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:33.990 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:34.556 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:34.815 Hugepages
00:02:34.815 node hugesize free / total
00:02:34.815 node0 1048576kB 0 / 0
00:02:34.815 node0 2048kB 0 / 0
00:02:34.815 
00:02:34.815 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:34.815 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:34.815 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:34.815 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:34.815 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:02:35.075 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
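setup.sh status condenses what the rest of the suite depends on: per-NUMA-node hugepage pools and the PCI-to-kernel-name mapping of the four emulated controllers (1b36:0010 is QEMU's NVMe device ID; nvme3 carries the three namespaces of the multi-image drive). The same numbers can be read straight from sysfs; a small sketch:

#!/bin/bash
# Per-NUMA-node hugepage pools, as in the Hugepages table above.
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        size=${hp##*hugepages-}
        echo "$(basename "$node") $size: $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
    done
done
# PCI address behind each NVMe controller, as in the BDF column.
for ctrl in /sys/class/nvme/nvme*; do
    echo "$(basename "$ctrl"): $(basename "$(readlink -f "$ctrl/device")")"
done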
00:02:35.075 + rm -f /tmp/spdk-ld-path
00:02:35.075 + source autorun-spdk.conf
00:02:35.075 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:35.075 ++ SPDK_TEST_NVME=1
00:02:35.075 ++ SPDK_TEST_FTL=1
00:02:35.075 ++ SPDK_TEST_ISAL=1
00:02:35.075 ++ SPDK_RUN_ASAN=1
00:02:35.075 ++ SPDK_RUN_UBSAN=1
00:02:35.075 ++ SPDK_TEST_XNVME=1
00:02:35.075 ++ SPDK_TEST_NVME_FDP=1
00:02:35.075 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:35.075 ++ RUN_NIGHTLY=0
00:02:35.075 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:35.075 + [[ -n '' ]]
00:02:35.075 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:35.075 + for M in /var/spdk/build-*-manifest.txt
00:02:35.075 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:35.075 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:35.075 + for M in /var/spdk/build-*-manifest.txt
00:02:35.075 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:35.075 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:35.075 + for M in /var/spdk/build-*-manifest.txt
00:02:35.075 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:35.075 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:35.075 ++ uname
00:02:35.075 + [[ Linux == \L\i\n\u\x ]]
00:02:35.075 + sudo dmesg -T
00:02:35.075 + sudo dmesg --clear
00:02:35.075 + dmesg_pid=5244
+ [[ Fedora Linux == FreeBSD ]]
00:02:35.075 + sudo dmesg -Tw
00:02:35.075 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:35.075 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:35.075 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:35.075 + [[ -x /usr/src/fio-static/fio ]]
00:02:35.075 + export FIO_BIN=/usr/src/fio-static/fio
00:02:35.075 + FIO_BIN=/usr/src/fio-static/fio
00:02:35.075 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:35.075 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:35.075 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:35.075 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:35.075 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:35.075 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:35.075 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:35.075 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:35.075 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:35.335 11:06:12 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:35.335 11:06:12 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:35.335 11:06:12 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:35.335 11:06:12 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:35.335 11:06:12 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:35.335 11:06:12 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:35.335 11:06:12 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:35.335 11:06:12 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:35.335 11:06:12 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:35.335 11:06:12 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:35.335 11:06:12 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:35.335 11:06:12 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:02:35.335 11:06:12 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:35.335 11:06:12 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:35.335 11:06:12 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:35.335 11:06:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:35.335 11:06:12 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:35.335 11:06:12 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:35.335 11:06:12 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:35.335 11:06:12 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:35.335 11:06:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.335 11:06:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.335 11:06:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.335 11:06:12 -- paths/export.sh@5 -- $ export PATH
00:02:35.335 11:06:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.335 11:06:12 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:35.335 11:06:12 -- common/autobuild_common.sh@486 -- $ date +%s
00:02:35.335 11:06:12 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731668772.XXXXXX
00:02:35.335 11:06:12 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731668772.B44G94
00:02:35.335 11:06:12 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:35.335 11:06:12 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:02:35.335 11:06:12 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:35.335 11:06:12 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:35.335 11:06:12 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:35.335 11:06:12 -- common/autobuild_common.sh@502 -- $ get_config_params
00:02:35.335 11:06:12 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:35.335 11:06:12 -- common/autotest_common.sh@10 -- $ set +x
00:02:35.335 11:06:12 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:35.335 11:06:12 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:35.335 11:06:12 -- pm/common@17 -- $ local monitor
00:02:35.335 11:06:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.335 11:06:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.335 11:06:12 -- pm/common@21 -- $ date +%s
00:02:35.335 11:06:12 -- pm/common@25 -- $ sleep 1
00:02:35.336 11:06:12 -- pm/common@21 -- $ date +%s
00:02:35.336 11:06:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731668772
00:02:35.336 11:06:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731668772
00:02:35.594 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731668772_collect-vmstat.pm.log
00:02:35.594 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731668772_collect-cpu-load.pm.log
00:02:36.530 11:06:13 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:36.530 11:06:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:36.530 11:06:13 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:36.530 11:06:13 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:36.530 11:06:13 -- spdk/autobuild.sh@16 -- $ date -u
00:02:36.530 Fri Nov 15 11:06:13 AM UTC 2024
00:02:36.530 11:06:13 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:36.530 v25.01-pre-217-g57db986b9
00:02:36.530 11:06:13 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:36.530 11:06:13 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:36.530 11:06:13 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:36.530 11:06:13 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:36.530 11:06:13 -- common/autotest_common.sh@10 -- $ set +x
00:02:36.530 ************************************
00:02:36.530 START TEST asan
00:02:36.530 ************************************
00:02:36.530 using asan
00:02:36.530 11:06:13 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:02:36.530 
00:02:36.530 real 0m0.000s
00:02:36.530 user 0m0.000s
00:02:36.530 sys 0m0.000s
00:02:36.530 11:06:13 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:36.530 ************************************
00:02:36.530 END TEST asan
00:02:36.530 ************************************
00:02:36.530 11:06:13 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:36.530 11:06:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:36.530 11:06:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:36.530 11:06:13 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:36.530 11:06:13 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:36.530 11:06:13 -- common/autotest_common.sh@10 -- $ set +x
00:02:36.530 ************************************
00:02:36.530 START TEST ubsan
00:02:36.530 ************************************
00:02:36.530 using ubsan
00:02:36.530 11:06:13 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:02:36.530 
00:02:36.530 real 0m0.000s
00:02:36.530 user 0m0.000s
00:02:36.530 sys 0m0.000s
00:02:36.530 11:06:13 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:36.530 ************************************
00:02:36.530 END TEST ubsan
00:02:36.530 ************************************
00:02:36.530 11:06:13 ubsan -- common/autotest_common.sh@10 -- $ set +x
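run_test is the helper behind the banner pairs above: it prints the START TEST/END TEST markers, runs its arguments under time (hence the real/user/sys lines even for a bare echo), and propagates the exit status. A simplified model of the pattern; the real helper lives in autotest_common.sh and does more bookkeeping:

#!/bin/bash
# Simplified model of SPDK's run_test wrapper.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}
run_test asan echo 'using asan'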
00:02:36.530 11:06:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:36.530 11:06:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:36.530 11:06:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:36.530 11:06:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:36.530 11:06:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:36.530 11:06:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:36.530 11:06:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:36.530 11:06:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:36.530 11:06:13 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:36.789 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:36.789 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:37.356 Using 'verbs' RDMA provider
00:02:53.689 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:11.789 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:11.789 Creating mk/config.mk...done.
00:03:11.789 Creating mk/cc.flags.mk...done.
00:03:11.789 Type 'make' to build.
00:03:11.789 11:06:47 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:11.789 11:06:47 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:03:11.789 11:06:47 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:03:11.789 11:06:47 -- common/autotest_common.sh@10 -- $ set +x
00:03:11.789 ************************************
00:03:11.789 START TEST make
00:03:11.789 ************************************
00:03:11.789 11:06:47 make -- common/autotest_common.sh@1127 -- $ make -j10
00:03:11.789 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:11.789 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:11.789 meson setup builddir \
00:03:11.789 -Dwith-libaio=enabled \
00:03:11.789 -Dwith-liburing=enabled \
00:03:11.789 -Dwith-libvfn=disabled \
00:03:11.789 -Dwith-spdk=disabled \
00:03:11.789 -Dexamples=false \
00:03:11.789 -Dtests=false \
00:03:11.789 -Dtools=false && \
00:03:11.789 meson compile -C builddir && \
00:03:11.789 cd -)
00:03:11.789 make[1]: Nothing to be done for 'all'.
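Because --with-xnvme is among the configure flags, make first builds the bundled xnvme as a Meson project with everything optional pinned off; only the libaio and io_uring backends stay enabled, which is why the Meson probe below checks libaio.h and liburing and skips libvfn and the spdk subproject. The quoted subshell is equivalent to running, standalone:

# Stand-alone equivalent of the xnvme step quoted above.
cd /home/vagrant/spdk_repo/spdk/xnvme
meson setup builddir \
    -Dwith-libaio=enabled \
    -Dwith-liburing=enabled \
    -Dwith-libvfn=disabled \
    -Dwith-spdk=disabled \
    -Dexamples=false -Dtests=false -Dtools=false
meson compile -C builddir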
00:03:13.169 The Meson build system
00:03:13.169 Version: 1.5.0
00:03:13.169 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:13.169 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:13.169 Build type: native build
00:03:13.169 Project name: xnvme
00:03:13.169 Project version: 0.7.5
00:03:13.169 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:13.169 C linker for the host machine: cc ld.bfd 2.40-14
00:03:13.169 Host machine cpu family: x86_64
00:03:13.169 Host machine cpu: x86_64
00:03:13.169 Message: host_machine.system: linux
00:03:13.169 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:13.169 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:13.169 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:13.169 Run-time dependency threads found: YES
00:03:13.169 Has header "setupapi.h" : NO
00:03:13.169 Has header "linux/blkzoned.h" : YES
00:03:13.169 Has header "linux/blkzoned.h" : YES (cached)
00:03:13.169 Has header "libaio.h" : YES
00:03:13.169 Library aio found: YES
00:03:13.169 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:13.169 Run-time dependency liburing found: YES 2.2
00:03:13.169 Dependency libvfn skipped: feature with-libvfn disabled
00:03:13.169 Found CMake: /usr/bin/cmake (3.27.7)
00:03:13.169 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:03:13.169 Subproject spdk : skipped: feature with-spdk disabled
00:03:13.169 Run-time dependency appleframeworks found: NO (tried framework)
00:03:13.169 Run-time dependency appleframeworks found: NO (tried framework)
00:03:13.169 Library rt found: YES
00:03:13.169 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:13.169 Configuring xnvme_config.h using configuration
00:03:13.169 Configuring xnvme.spec using configuration
00:03:13.169 Run-time dependency bash-completion found: YES 2.11
00:03:13.169 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:13.169 Program cp found: YES (/usr/bin/cp)
00:03:13.169 Build targets in project: 3
00:03:13.169 
00:03:13.169 xnvme 0.7.5
00:03:13.169 
00:03:13.169 Subprojects
00:03:13.169 spdk : NO Feature 'with-spdk' disabled
00:03:13.169 
00:03:13.169 User defined options
00:03:13.169 examples : false
00:03:13.169 tests : false
00:03:13.169 tools : false
00:03:13.169 with-libaio : enabled
00:03:13.169 with-liburing: enabled
00:03:13.169 with-libvfn : disabled
00:03:13.169 with-spdk : disabled
00:03:13.169 
00:03:13.169 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:13.428 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:13.428 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:03:13.428 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:03:13.428 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:03:13.428 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:03:13.428 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:03:13.428 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:03:13.428 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:03:13.428 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:03:13.428 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:03:13.428 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:03:13.687 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:03:13.687 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:03:13.687 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:03:13.687 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:03:13.687 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:03:13.687 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:03:13.687 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:03:13.687 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:03:13.687 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:03:13.687 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:03:13.687 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:03:13.687 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:03:13.687 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:03:13.687 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:03:13.687 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:03:13.687 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:03:13.687 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:03:13.687 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:03:13.687 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:03:13.687 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:03:13.687 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:03:13.687 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:03:13.688 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:03:13.688 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:03:13.688 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:03:13.688 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:03:13.947 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:03:13.947 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:03:13.947 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:03:13.947 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:03:13.947 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:03:13.947 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:03:13.947 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:03:13.947 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:03:13.947 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:03:13.947 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:03:13.947 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:03:13.947 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:03:13.947 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:03:13.947 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:03:13.947 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:03:13.947 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:03:13.947 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:03:13.947 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:03:13.947 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:03:13.947 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:03:13.947 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:03:13.947 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:03:13.947 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:03:13.947 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:03:13.947 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:03:13.947 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:03:14.206 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:03:14.206 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:03:14.206 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:03:14.206 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:03:14.206 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:03:14.206 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:03:14.206 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:03:14.206 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:03:14.206 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:03:14.206 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:03:14.206 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:03:14.465 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:03:14.465 [75/76] Linking static target lib/libxnvme.a
00:03:14.465 [76/76] Linking target lib/libxnvme.so.0.7.5
00:03:14.465 INFO: autodetecting backend as ninja
00:03:14.465 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:14.725 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:03:21.348 The Meson build system
00:03:21.348 Version: 1.5.0
00:03:21.348 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:21.348 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:21.348 Build type: native build
00:03:21.348 Program cat found: YES (/usr/bin/cat)
00:03:21.348 Project name: DPDK
00:03:21.348 Project version: 24.03.0
00:03:21.349 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:21.349 C linker for the host machine: cc ld.bfd 2.40-14
00:03:21.349 Host machine cpu family: x86_64
00:03:21.349 Host machine cpu: x86_64
00:03:21.349 Message: ## Building in Developer Mode ##
00:03:21.349 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:21.349 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:21.349 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:21.349 Program python3 found: YES (/usr/bin/python3)
00:03:21.349 Program cat found: YES (/usr/bin/cat)
00:03:21.349 Compiler for C supports arguments -march=native: YES
00:03:21.349 Checking for size of "void *" : 8
00:03:21.349 Checking for size of "void *" : 8 (cached)
00:03:21.349 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:21.349 Library m found: YES
00:03:21.349 Library numa found: YES
00:03:21.349 Has header "numaif.h" : YES
00:03:21.349 Library fdt found: NO
00:03:21.349 Library execinfo found: NO
00:03:21.349 Has header "execinfo.h" : YES
00:03:21.349 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:21.349 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:21.349 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:21.349 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:21.349 Run-time dependency openssl found: YES 3.1.1
00:03:21.349 Run-time dependency libpcap found: YES 1.10.4
00:03:21.349 Has header "pcap.h" with dependency libpcap: YES
00:03:21.349 Compiler for C supports arguments -Wcast-qual: YES
00:03:21.349 Compiler for C supports arguments -Wdeprecated: YES
00:03:21.349 Compiler for C supports arguments -Wformat: YES
00:03:21.349 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:21.349 Compiler for C supports arguments -Wformat-security: NO
00:03:21.349 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:21.349 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:21.349 Compiler for C supports arguments -Wnested-externs: YES
00:03:21.349 Compiler for C supports arguments -Wold-style-definition: YES
00:03:21.349 Compiler for C supports arguments -Wpointer-arith: YES
00:03:21.349 Compiler for C supports arguments -Wsign-compare: YES
00:03:21.349 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:21.349 Compiler for C supports arguments -Wundef: YES
00:03:21.349 Compiler for C supports arguments -Wwrite-strings: YES
00:03:21.349 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:21.349 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:21.349 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:21.349 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:21.349 Program objdump found: YES (/usr/bin/objdump)
00:03:21.349 Compiler for C supports arguments -mavx512f: YES
00:03:21.349 Checking if "AVX512 checking" compiles: YES
00:03:21.349 Fetching value of define "__SSE4_2__" : 1
00:03:21.349 Fetching value of define "__AES__" : 1
00:03:21.349 Fetching value of define "__AVX__" : 1
00:03:21.349 Fetching value of define "__AVX2__" : 1
00:03:21.349 Fetching value of define "__AVX512BW__" : 1
00:03:21.349 Fetching value of define "__AVX512CD__" : 1
00:03:21.349 Fetching value of define "__AVX512DQ__" : 1
00:03:21.349 Fetching value of define "__AVX512F__" : 1
00:03:21.349 Fetching value of define "__AVX512VL__" : 1
00:03:21.349 Fetching value of define "__PCLMUL__" : 1
00:03:21.349 Fetching value of define "__RDRND__" : 1
00:03:21.349 Fetching value of define "__RDSEED__" : 1
00:03:21.349 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:21.349 Fetching value of define "__znver1__" : (undefined)
00:03:21.349 Fetching value of define "__znver2__" : (undefined)
00:03:21.349 Fetching value of define "__znver3__" : (undefined)
00:03:21.349 Fetching value of define "__znver4__" : (undefined)
00:03:21.349 Library asan found: YES
00:03:21.349 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:21.349 Message: lib/log: Defining dependency "log"
00:03:21.349 Message: lib/kvargs: Defining dependency "kvargs"
00:03:21.349 Message: lib/telemetry: Defining dependency "telemetry"
00:03:21.349 Library rt found: YES
00:03:21.349 Checking for function "getentropy" : NO
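The "Fetching value of define" lines are Meson asking the compiler which ISA extensions -march=native enables on this host; DPDK selects its vector code paths from these macros. The same probe can be run by hand by dumping the compiler's predefined macros (grep pattern illustrative):

# List the AVX-512 macros that -march=native predefines on this machine,
# mirroring the __AVX512F__/__AVX512BW__/... checks above.
echo | cc -march=native -dM -E - | grep -E '__AVX512(F|BW|CD|DQ|VL)__'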
00:03:21.349 Message: lib/eal: Defining dependency "eal"
00:03:21.349 Message: lib/ring: Defining dependency "ring"
00:03:21.349 Message: lib/rcu: Defining dependency "rcu"
00:03:21.349 Message: lib/mempool: Defining dependency "mempool"
00:03:21.349 Message: lib/mbuf: Defining dependency "mbuf"
00:03:21.349 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:21.349 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:21.349 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:21.349 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:21.349 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:21.349 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:21.349 Compiler for C supports arguments -mpclmul: YES
00:03:21.349 Compiler for C supports arguments -maes: YES
00:03:21.349 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:21.349 Compiler for C supports arguments -mavx512bw: YES
00:03:21.349 Compiler for C supports arguments -mavx512dq: YES
00:03:21.349 Compiler for C supports arguments -mavx512vl: YES
00:03:21.349 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:21.349 Compiler for C supports arguments -mavx2: YES
00:03:21.349 Compiler for C supports arguments -mavx: YES
00:03:21.349 Message: lib/net: Defining dependency "net"
00:03:21.349 Message: lib/meter: Defining dependency "meter"
00:03:21.349 Message: lib/ethdev: Defining dependency "ethdev"
00:03:21.349 Message: lib/pci: Defining dependency "pci"
00:03:21.349 Message: lib/cmdline: Defining dependency "cmdline"
00:03:21.349 Message: lib/hash: Defining dependency "hash"
00:03:21.349 Message: lib/timer: Defining dependency "timer"
00:03:21.349 Message: lib/compressdev: Defining dependency "compressdev"
00:03:21.349 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:21.349 Message: lib/dmadev: Defining dependency "dmadev"
00:03:21.349 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:21.349 Message: lib/power: Defining dependency "power"
00:03:21.349 Message: lib/reorder: Defining dependency "reorder"
00:03:21.349 Message: lib/security: Defining dependency "security"
00:03:21.349 Has header "linux/userfaultfd.h" : YES
00:03:21.349 Has header "linux/vduse.h" : YES
00:03:21.349 Message: lib/vhost: Defining dependency "vhost"
00:03:21.349 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:21.349 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:21.349 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:21.349 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:21.349 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:21.349 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:21.349 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:21.349 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:21.349 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:21.349 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:21.349 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:21.349 Configuring doxy-api-html.conf using configuration
00:03:21.349 Configuring doxy-api-man.conf using configuration
00:03:21.349 Program mandb found: YES (/usr/bin/mandb)
00:03:21.349 Program sphinx-build found: NO
00:03:21.349 Configuring rte_build_config.h using configuration
00:03:21.349 Message:
00:03:21.349 =================
00:03:21.349 Applications Enabled
00:03:21.349 =================
00:03:21.349 
00:03:21.349 apps:
00:03:21.349 
00:03:21.349 
00:03:21.349 Message:
00:03:21.349 =================
00:03:21.349 Libraries Enabled
00:03:21.349 =================
00:03:21.349 
00:03:21.349 libs:
00:03:21.349 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:03:21.349 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:03:21.349 cryptodev, dmadev, power, reorder, security, vhost, 
00:03:21.349 
00:03:21.349 Message:
00:03:21.349 ===============
00:03:21.349 Drivers Enabled
00:03:21.349 ===============
00:03:21.349 
00:03:21.349 common:
00:03:21.349 
00:03:21.349 bus:
00:03:21.349 pci, vdev, 
00:03:21.349 mempool:
00:03:21.349 ring, 
00:03:21.349 dma:
00:03:21.349 
00:03:21.349 net:
00:03:21.349 
00:03:21.349 crypto:
00:03:21.349 
00:03:21.349 compress:
00:03:21.349 
00:03:21.349 vdpa:
00:03:21.349 
00:03:21.349 
00:03:21.349 Message:
00:03:21.349 =================
00:03:21.349 Content Skipped
00:03:21.349 =================
00:03:21.349 
00:03:21.349 apps:
00:03:21.349 dumpcap: explicitly disabled via build config
00:03:21.349 graph: explicitly disabled via build config
00:03:21.349 pdump: explicitly disabled via build config
00:03:21.349 proc-info: explicitly disabled via build config
00:03:21.349 test-acl: explicitly disabled via build config
00:03:21.349 test-bbdev: explicitly disabled via build config
00:03:21.350 test-cmdline: explicitly disabled via build config
00:03:21.350 test-compress-perf: explicitly disabled via build config
00:03:21.350 test-crypto-perf: explicitly disabled via build config
00:03:21.350 test-dma-perf: explicitly disabled via build config
00:03:21.350 test-eventdev: explicitly disabled via build config
00:03:21.350 test-fib: explicitly disabled via build config
00:03:21.350 test-flow-perf: explicitly disabled via build config
00:03:21.350 test-gpudev: explicitly disabled via build config
00:03:21.350 test-mldev: explicitly disabled via build config
00:03:21.350 test-pipeline: explicitly disabled via build config
00:03:21.350 test-pmd: explicitly disabled via build config
00:03:21.350 test-regex: explicitly disabled via build config
00:03:21.350 test-sad: explicitly disabled via build config
00:03:21.350 test-security-perf: explicitly disabled via build config
00:03:21.350 
00:03:21.350 libs:
00:03:21.350 argparse: explicitly disabled via build config
00:03:21.350 metrics: explicitly disabled via build config
00:03:21.350 acl: explicitly disabled via build config
00:03:21.350 bbdev: explicitly disabled via build config
00:03:21.350 bitratestats: explicitly disabled via build config
00:03:21.350 bpf: explicitly disabled via build config
00:03:21.350 cfgfile: explicitly disabled via build config
00:03:21.350 distributor: explicitly disabled via build config
00:03:21.350 efd: explicitly disabled via build config
00:03:21.350 eventdev: explicitly disabled via build config
00:03:21.350 dispatcher: explicitly disabled via build config
00:03:21.350 gpudev: explicitly disabled via build config
00:03:21.350 gro: explicitly disabled via build config
00:03:21.350 gso: explicitly disabled via build config
00:03:21.350 ip_frag: explicitly disabled via build config
00:03:21.350 jobstats: explicitly disabled via build config
00:03:21.350 latencystats: explicitly disabled via build config
00:03:21.350 lpm: explicitly disabled via build config
00:03:21.350 member: explicitly disabled via build config
00:03:21.350 pcapng: explicitly disabled via build config
00:03:21.350 rawdev: explicitly disabled via build config
regexdev: explicitly disabled via build config 00:03:21.350 mldev: explicitly disabled via build config 00:03:21.350 rib: explicitly disabled via build config 00:03:21.350 sched: explicitly disabled via build config 00:03:21.350 stack: explicitly disabled via build config 00:03:21.350 ipsec: explicitly disabled via build config 00:03:21.350 pdcp: explicitly disabled via build config 00:03:21.350 fib: explicitly disabled via build config 00:03:21.350 port: explicitly disabled via build config 00:03:21.350 pdump: explicitly disabled via build config 00:03:21.350 table: explicitly disabled via build config 00:03:21.350 pipeline: explicitly disabled via build config 00:03:21.350 graph: explicitly disabled via build config 00:03:21.350 node: explicitly disabled via build config 00:03:21.350 00:03:21.350 drivers: 00:03:21.350 common/cpt: not in enabled drivers build config 00:03:21.350 common/dpaax: not in enabled drivers build config 00:03:21.350 common/iavf: not in enabled drivers build config 00:03:21.350 common/idpf: not in enabled drivers build config 00:03:21.350 common/ionic: not in enabled drivers build config 00:03:21.350 common/mvep: not in enabled drivers build config 00:03:21.350 common/octeontx: not in enabled drivers build config 00:03:21.350 bus/auxiliary: not in enabled drivers build config 00:03:21.350 bus/cdx: not in enabled drivers build config 00:03:21.350 bus/dpaa: not in enabled drivers build config 00:03:21.350 bus/fslmc: not in enabled drivers build config 00:03:21.350 bus/ifpga: not in enabled drivers build config 00:03:21.350 bus/platform: not in enabled drivers build config 00:03:21.350 bus/uacce: not in enabled drivers build config 00:03:21.350 bus/vmbus: not in enabled drivers build config 00:03:21.350 common/cnxk: not in enabled drivers build config 00:03:21.350 common/mlx5: not in enabled drivers build config 00:03:21.350 common/nfp: not in enabled drivers build config 00:03:21.350 common/nitrox: not in enabled drivers build config 00:03:21.350 common/qat: not in enabled drivers build config 00:03:21.350 common/sfc_efx: not in enabled drivers build config 00:03:21.350 mempool/bucket: not in enabled drivers build config 00:03:21.350 mempool/cnxk: not in enabled drivers build config 00:03:21.350 mempool/dpaa: not in enabled drivers build config 00:03:21.350 mempool/dpaa2: not in enabled drivers build config 00:03:21.350 mempool/octeontx: not in enabled drivers build config 00:03:21.350 mempool/stack: not in enabled drivers build config 00:03:21.350 dma/cnxk: not in enabled drivers build config 00:03:21.350 dma/dpaa: not in enabled drivers build config 00:03:21.350 dma/dpaa2: not in enabled drivers build config 00:03:21.350 dma/hisilicon: not in enabled drivers build config 00:03:21.350 dma/idxd: not in enabled drivers build config 00:03:21.350 dma/ioat: not in enabled drivers build config 00:03:21.350 dma/skeleton: not in enabled drivers build config 00:03:21.350 net/af_packet: not in enabled drivers build config 00:03:21.350 net/af_xdp: not in enabled drivers build config 00:03:21.350 net/ark: not in enabled drivers build config 00:03:21.350 net/atlantic: not in enabled drivers build config 00:03:21.350 net/avp: not in enabled drivers build config 00:03:21.350 net/axgbe: not in enabled drivers build config 00:03:21.350 net/bnx2x: not in enabled drivers build config 00:03:21.350 net/bnxt: not in enabled drivers build config 00:03:21.350 net/bonding: not in enabled drivers build config 00:03:21.350 net/cnxk: not in enabled drivers build config 00:03:21.350 net/cpfl: 
not in enabled drivers build config 00:03:21.350 net/cxgbe: not in enabled drivers build config 00:03:21.350 net/dpaa: not in enabled drivers build config 00:03:21.350 net/dpaa2: not in enabled drivers build config 00:03:21.350 net/e1000: not in enabled drivers build config 00:03:21.350 net/ena: not in enabled drivers build config 00:03:21.350 net/enetc: not in enabled drivers build config 00:03:21.350 net/enetfec: not in enabled drivers build config 00:03:21.350 net/enic: not in enabled drivers build config 00:03:21.350 net/failsafe: not in enabled drivers build config 00:03:21.350 net/fm10k: not in enabled drivers build config 00:03:21.350 net/gve: not in enabled drivers build config 00:03:21.350 net/hinic: not in enabled drivers build config 00:03:21.350 net/hns3: not in enabled drivers build config 00:03:21.350 net/i40e: not in enabled drivers build config 00:03:21.350 net/iavf: not in enabled drivers build config 00:03:21.350 net/ice: not in enabled drivers build config 00:03:21.350 net/idpf: not in enabled drivers build config 00:03:21.350 net/igc: not in enabled drivers build config 00:03:21.350 net/ionic: not in enabled drivers build config 00:03:21.350 net/ipn3ke: not in enabled drivers build config 00:03:21.350 net/ixgbe: not in enabled drivers build config 00:03:21.350 net/mana: not in enabled drivers build config 00:03:21.350 net/memif: not in enabled drivers build config 00:03:21.350 net/mlx4: not in enabled drivers build config 00:03:21.350 net/mlx5: not in enabled drivers build config 00:03:21.350 net/mvneta: not in enabled drivers build config 00:03:21.350 net/mvpp2: not in enabled drivers build config 00:03:21.350 net/netvsc: not in enabled drivers build config 00:03:21.350 net/nfb: not in enabled drivers build config 00:03:21.350 net/nfp: not in enabled drivers build config 00:03:21.350 net/ngbe: not in enabled drivers build config 00:03:21.350 net/null: not in enabled drivers build config 00:03:21.350 net/octeontx: not in enabled drivers build config 00:03:21.350 net/octeon_ep: not in enabled drivers build config 00:03:21.350 net/pcap: not in enabled drivers build config 00:03:21.350 net/pfe: not in enabled drivers build config 00:03:21.350 net/qede: not in enabled drivers build config 00:03:21.350 net/ring: not in enabled drivers build config 00:03:21.350 net/sfc: not in enabled drivers build config 00:03:21.350 net/softnic: not in enabled drivers build config 00:03:21.350 net/tap: not in enabled drivers build config 00:03:21.350 net/thunderx: not in enabled drivers build config 00:03:21.350 net/txgbe: not in enabled drivers build config 00:03:21.350 net/vdev_netvsc: not in enabled drivers build config 00:03:21.350 net/vhost: not in enabled drivers build config 00:03:21.350 net/virtio: not in enabled drivers build config 00:03:21.350 net/vmxnet3: not in enabled drivers build config 00:03:21.350 raw/*: missing internal dependency, "rawdev" 00:03:21.350 crypto/armv8: not in enabled drivers build config 00:03:21.350 crypto/bcmfs: not in enabled drivers build config 00:03:21.350 crypto/caam_jr: not in enabled drivers build config 00:03:21.350 crypto/ccp: not in enabled drivers build config 00:03:21.350 crypto/cnxk: not in enabled drivers build config 00:03:21.350 crypto/dpaa_sec: not in enabled drivers build config 00:03:21.350 crypto/dpaa2_sec: not in enabled drivers build config 00:03:21.350 crypto/ipsec_mb: not in enabled drivers build config 00:03:21.350 crypto/mlx5: not in enabled drivers build config 00:03:21.350 crypto/mvsam: not in enabled drivers build config 
00:03:21.350 crypto/nitrox: not in enabled drivers build config 00:03:21.350 crypto/null: not in enabled drivers build config 00:03:21.350 crypto/octeontx: not in enabled drivers build config 00:03:21.351 crypto/openssl: not in enabled drivers build config 00:03:21.351 crypto/scheduler: not in enabled drivers build config 00:03:21.351 crypto/uadk: not in enabled drivers build config 00:03:21.351 crypto/virtio: not in enabled drivers build config 00:03:21.351 compress/isal: not in enabled drivers build config 00:03:21.351 compress/mlx5: not in enabled drivers build config 00:03:21.351 compress/nitrox: not in enabled drivers build config 00:03:21.351 compress/octeontx: not in enabled drivers build config 00:03:21.351 compress/zlib: not in enabled drivers build config 00:03:21.351 regex/*: missing internal dependency, "regexdev" 00:03:21.351 ml/*: missing internal dependency, "mldev" 00:03:21.351 vdpa/ifc: not in enabled drivers build config 00:03:21.351 vdpa/mlx5: not in enabled drivers build config 00:03:21.351 vdpa/nfp: not in enabled drivers build config 00:03:21.351 vdpa/sfc: not in enabled drivers build config 00:03:21.351 event/*: missing internal dependency, "eventdev" 00:03:21.351 baseband/*: missing internal dependency, "bbdev" 00:03:21.351 gpu/*: missing internal dependency, "gpudev" 00:03:21.351 00:03:21.351 00:03:21.609 Build targets in project: 85 00:03:21.609 00:03:21.609 DPDK 24.03.0 00:03:21.609 00:03:21.609 User defined options 00:03:21.609 buildtype : debug 00:03:21.609 default_library : shared 00:03:21.609 libdir : lib 00:03:21.609 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:21.609 b_sanitize : address 00:03:21.609 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:21.609 c_link_args : 00:03:21.609 cpu_instruction_set: native 00:03:21.609 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:21.609 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:21.609 enable_docs : false 00:03:21.609 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:21.609 enable_kmods : false 00:03:21.609 max_lcores : 128 00:03:21.609 tests : false 00:03:21.609 00:03:21.609 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:22.176 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:22.176 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:22.176 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:22.176 [3/268] Linking static target lib/librte_kvargs.a 00:03:22.176 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:22.176 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:22.176 [6/268] Linking static target lib/librte_log.a 00:03:22.435 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:22.435 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:22.435 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:22.435 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson 
to capture output) 00:03:22.694 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:22.694 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:22.694 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:22.694 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:22.694 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:22.694 [16/268] Linking static target lib/librte_telemetry.a 00:03:22.694 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:22.694 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:23.262 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:23.262 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:23.262 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.262 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:23.262 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:23.262 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:23.262 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:23.262 [26/268] Linking target lib/librte_log.so.24.1 00:03:23.262 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:23.521 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:23.521 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:23.521 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:23.521 [31/268] Linking target lib/librte_kvargs.so.24.1 00:03:23.521 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:23.521 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.521 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:23.780 [35/268] Linking target lib/librte_telemetry.so.24.1 00:03:23.780 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:23.780 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:23.780 [38/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:23.780 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:23.780 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:23.780 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:23.780 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:23.780 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:23.780 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:24.038 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:24.039 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:24.297 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:24.297 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:24.297 [49/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:24.297 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:24.297 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:24.297 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:24.556 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:24.556 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:24.556 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:24.815 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:24.815 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:24.815 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:24.815 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:24.815 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:24.815 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:24.815 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:24.815 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:25.074 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:25.074 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:25.074 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:25.332 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:25.332 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:25.332 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:25.591 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:25.591 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:25.591 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:25.591 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:25.591 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:25.591 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:25.591 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:25.591 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:25.591 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:25.848 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:25.848 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:25.848 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:25.848 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:26.106 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:26.106 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:26.106 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:26.106 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:26.106 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:26.364 [88/268] Linking static target lib/librte_eal.a 00:03:26.364 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:26.364 [90/268] 
Linking static target lib/librte_rcu.a 00:03:26.364 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:26.364 [92/268] Linking static target lib/librte_ring.a 00:03:26.364 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:26.364 [94/268] Linking static target lib/librte_mempool.a 00:03:26.364 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:26.624 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:26.624 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:26.624 [98/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:26.883 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:26.883 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:26.883 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:26.883 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.883 [103/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.883 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:26.883 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:26.883 [106/268] Linking static target lib/librte_mbuf.a 00:03:27.142 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:27.142 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:27.142 [109/268] Linking static target lib/librte_meter.a 00:03:27.142 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:27.142 [111/268] Linking static target lib/librte_net.a 00:03:27.401 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:27.401 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:27.401 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:27.401 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:27.401 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.660 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.660 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.918 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:28.177 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.177 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:28.177 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:28.436 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:28.436 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:28.436 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:28.436 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:28.436 [127/268] Linking static target lib/librte_pci.a 00:03:28.760 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:28.760 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:28.760 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:28.760 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 
00:03:28.760 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:28.760 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:28.760 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:28.760 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:28.760 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:29.018 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:29.018 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.018 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:29.018 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:29.018 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:29.018 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:29.018 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:29.018 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:29.018 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:29.277 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:29.277 [147/268] Linking static target lib/librte_cmdline.a 00:03:29.277 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:29.277 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:29.536 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:29.536 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:29.536 [152/268] Linking static target lib/librte_timer.a 00:03:29.536 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:29.795 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:29.795 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:29.795 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:29.795 [157/268] Linking static target lib/librte_ethdev.a 00:03:30.054 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:30.055 [159/268] Linking static target lib/librte_compressdev.a 00:03:30.055 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:30.055 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:30.055 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.055 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:30.313 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:30.313 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:30.571 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:30.572 [167/268] Linking static target lib/librte_dmadev.a 00:03:30.572 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:30.572 [169/268] Linking static target lib/librte_hash.a 00:03:30.572 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:30.572 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 
00:03:30.572 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:30.830 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.830 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.830 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:31.115 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:31.115 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:31.115 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:31.115 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:31.115 [180/268] Linking static target lib/librte_cryptodev.a 00:03:31.115 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:31.374 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:31.374 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.374 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:31.374 [185/268] Linking static target lib/librte_power.a 00:03:31.633 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.633 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:31.633 [188/268] Linking static target lib/librte_reorder.a 00:03:31.633 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:31.892 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:31.892 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:31.892 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:31.892 [193/268] Linking static target lib/librte_security.a 00:03:32.457 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.457 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:32.716 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.716 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:32.716 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.974 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:32.974 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:32.974 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:33.233 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:33.233 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:33.233 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:33.491 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:33.491 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:33.750 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:33.750 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:33.750 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:33.750 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:33.750 
[211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.008 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:34.009 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:34.009 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:34.009 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:34.009 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:34.009 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:34.009 [218/268] Linking static target drivers/librte_bus_pci.a 00:03:34.009 [219/268] Linking static target drivers/librte_bus_vdev.a 00:03:34.009 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:34.009 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:34.267 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:34.267 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:34.267 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:34.267 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.267 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:34.527 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.096 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:38.386 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:38.386 [230/268] Linking static target lib/librte_vhost.a 00:03:38.954 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.954 [232/268] Linking target lib/librte_eal.so.24.1 00:03:39.213 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:39.213 [234/268] Linking target lib/librte_pci.so.24.1 00:03:39.213 [235/268] Linking target lib/librte_meter.so.24.1 00:03:39.213 [236/268] Linking target lib/librte_timer.so.24.1 00:03:39.213 [237/268] Linking target lib/librte_ring.so.24.1 00:03:39.213 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:39.213 [239/268] Linking target lib/librte_dmadev.so.24.1 00:03:39.213 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:39.213 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:39.213 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:39.471 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:39.471 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:39.471 [245/268] Linking target lib/librte_rcu.so.24.1 00:03:39.471 [246/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.471 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:39.471 [248/268] Linking target lib/librte_mempool.so.24.1 00:03:39.471 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:39.471 [250/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:39.471 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:39.471 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:39.730 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:39.730 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:39.730 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:39.730 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:39.730 [257/268] Linking target lib/librte_net.so.24.1 00:03:39.989 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:39.989 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:39.989 [260/268] Linking target lib/librte_cmdline.so.24.1 00:03:39.989 [261/268] Linking target lib/librte_security.so.24.1 00:03:39.989 [262/268] Linking target lib/librte_hash.so.24.1 00:03:39.990 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:39.990 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:40.249 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:40.249 [266/268] Linking target lib/librte_power.so.24.1 00:03:40.249 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.249 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:40.508 INFO: autodetecting backend as ninja 00:03:40.508 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:58.599 CC lib/ut_mock/mock.o 00:03:58.599 CC lib/log/log.o 00:03:58.599 CC lib/log/log_flags.o 00:03:58.599 CC lib/log/log_deprecated.o 00:03:58.599 CC lib/ut/ut.o 00:03:58.599 LIB libspdk_log.a 00:03:58.599 LIB libspdk_ut.a 00:03:58.599 LIB libspdk_ut_mock.a 00:03:58.599 SO libspdk_ut.so.2.0 00:03:58.599 SO libspdk_ut_mock.so.6.0 00:03:58.599 SO libspdk_log.so.7.1 00:03:58.599 SYMLINK libspdk_ut_mock.so 00:03:58.599 SYMLINK libspdk_ut.so 00:03:58.599 SYMLINK libspdk_log.so 00:03:58.599 CC lib/dma/dma.o 00:03:58.599 CC lib/util/base64.o 00:03:58.599 CC lib/util/cpuset.o 00:03:58.599 CC lib/util/crc16.o 00:03:58.599 CC lib/util/crc32.o 00:03:58.599 CC lib/util/bit_array.o 00:03:58.599 CC lib/util/crc32c.o 00:03:58.599 CXX lib/trace_parser/trace.o 00:03:58.599 CC lib/ioat/ioat.o 00:03:58.599 CC lib/vfio_user/host/vfio_user_pci.o 00:03:58.599 CC lib/util/crc32_ieee.o 00:03:58.599 CC lib/util/crc64.o 00:03:58.599 CC lib/util/dif.o 00:03:58.600 CC lib/util/fd.o 00:03:58.600 CC lib/util/fd_group.o 00:03:58.600 LIB libspdk_dma.a 00:03:58.600 CC lib/util/file.o 00:03:58.600 SO libspdk_dma.so.5.0 00:03:58.600 CC lib/util/hexlify.o 00:03:58.600 CC lib/util/iov.o 00:03:58.600 CC lib/util/math.o 00:03:58.859 SYMLINK libspdk_dma.so 00:03:58.859 CC lib/vfio_user/host/vfio_user.o 00:03:58.859 LIB libspdk_ioat.a 00:03:58.859 SO libspdk_ioat.so.7.0 00:03:58.859 CC lib/util/net.o 00:03:58.859 CC lib/util/pipe.o 00:03:58.859 SYMLINK libspdk_ioat.so 00:03:58.859 CC lib/util/strerror_tls.o 00:03:58.859 CC lib/util/string.o 00:03:58.859 CC lib/util/uuid.o 00:03:58.859 CC lib/util/xor.o 00:03:58.859 CC lib/util/zipf.o 00:03:58.859 LIB libspdk_vfio_user.a 00:03:58.859 CC lib/util/md5.o 00:03:59.119 SO libspdk_vfio_user.so.5.0 00:03:59.119 SYMLINK libspdk_vfio_user.so 00:03:59.378 LIB libspdk_util.a 00:03:59.378 SO libspdk_util.so.10.1 00:03:59.378 LIB libspdk_trace_parser.a 00:03:59.638 SO 
libspdk_trace_parser.so.6.0 00:03:59.638 SYMLINK libspdk_util.so 00:03:59.638 SYMLINK libspdk_trace_parser.so 00:03:59.897 CC lib/env_dpdk/env.o 00:03:59.897 CC lib/rdma_utils/rdma_utils.o 00:03:59.897 CC lib/env_dpdk/memory.o 00:03:59.897 CC lib/env_dpdk/init.o 00:03:59.897 CC lib/env_dpdk/pci.o 00:03:59.897 CC lib/env_dpdk/threads.o 00:03:59.897 CC lib/idxd/idxd.o 00:03:59.897 CC lib/vmd/vmd.o 00:03:59.897 CC lib/conf/conf.o 00:03:59.897 CC lib/json/json_parse.o 00:03:59.897 CC lib/env_dpdk/pci_ioat.o 00:03:59.897 LIB libspdk_conf.a 00:04:00.156 SO libspdk_conf.so.6.0 00:04:00.156 LIB libspdk_rdma_utils.a 00:04:00.156 CC lib/json/json_util.o 00:04:00.156 CC lib/json/json_write.o 00:04:00.156 SO libspdk_rdma_utils.so.1.0 00:04:00.156 SYMLINK libspdk_conf.so 00:04:00.156 CC lib/env_dpdk/pci_virtio.o 00:04:00.156 SYMLINK libspdk_rdma_utils.so 00:04:00.156 CC lib/env_dpdk/pci_vmd.o 00:04:00.156 CC lib/env_dpdk/pci_idxd.o 00:04:00.156 CC lib/vmd/led.o 00:04:00.156 CC lib/env_dpdk/pci_event.o 00:04:00.156 CC lib/env_dpdk/sigbus_handler.o 00:04:00.416 CC lib/env_dpdk/pci_dpdk.o 00:04:00.416 LIB libspdk_json.a 00:04:00.416 CC lib/rdma_provider/common.o 00:04:00.416 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:00.416 SO libspdk_json.so.6.0 00:04:00.416 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:00.416 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:00.416 CC lib/idxd/idxd_user.o 00:04:00.416 CC lib/idxd/idxd_kernel.o 00:04:00.416 SYMLINK libspdk_json.so 00:04:00.416 LIB libspdk_vmd.a 00:04:00.416 SO libspdk_vmd.so.6.0 00:04:00.675 SYMLINK libspdk_vmd.so 00:04:00.675 LIB libspdk_rdma_provider.a 00:04:00.675 SO libspdk_rdma_provider.so.7.0 00:04:00.675 CC lib/jsonrpc/jsonrpc_server.o 00:04:00.675 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:00.675 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:00.675 CC lib/jsonrpc/jsonrpc_client.o 00:04:00.675 LIB libspdk_idxd.a 00:04:00.675 SO libspdk_idxd.so.12.1 00:04:00.675 SYMLINK libspdk_rdma_provider.so 00:04:00.675 SYMLINK libspdk_idxd.so 00:04:00.935 LIB libspdk_jsonrpc.a 00:04:00.935 SO libspdk_jsonrpc.so.6.0 00:04:01.195 SYMLINK libspdk_jsonrpc.so 00:04:01.195 LIB libspdk_env_dpdk.a 00:04:01.454 SO libspdk_env_dpdk.so.15.1 00:04:01.454 CC lib/rpc/rpc.o 00:04:01.454 SYMLINK libspdk_env_dpdk.so 00:04:01.713 LIB libspdk_rpc.a 00:04:01.713 SO libspdk_rpc.so.6.0 00:04:01.971 SYMLINK libspdk_rpc.so 00:04:02.230 CC lib/trace/trace_rpc.o 00:04:02.230 CC lib/trace/trace.o 00:04:02.230 CC lib/trace/trace_flags.o 00:04:02.230 CC lib/notify/notify_rpc.o 00:04:02.230 CC lib/notify/notify.o 00:04:02.230 CC lib/keyring/keyring.o 00:04:02.230 CC lib/keyring/keyring_rpc.o 00:04:02.489 LIB libspdk_notify.a 00:04:02.489 SO libspdk_notify.so.6.0 00:04:02.489 LIB libspdk_keyring.a 00:04:02.489 LIB libspdk_trace.a 00:04:02.489 SO libspdk_keyring.so.2.0 00:04:02.489 SYMLINK libspdk_notify.so 00:04:02.489 SO libspdk_trace.so.11.0 00:04:02.489 SYMLINK libspdk_keyring.so 00:04:02.748 SYMLINK libspdk_trace.so 00:04:03.008 CC lib/thread/thread.o 00:04:03.008 CC lib/thread/iobuf.o 00:04:03.008 CC lib/sock/sock.o 00:04:03.008 CC lib/sock/sock_rpc.o 00:04:03.577 LIB libspdk_sock.a 00:04:03.577 SO libspdk_sock.so.10.0 00:04:03.577 SYMLINK libspdk_sock.so 00:04:04.146 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:04.146 CC lib/nvme/nvme_fabric.o 00:04:04.146 CC lib/nvme/nvme_ctrlr.o 00:04:04.146 CC lib/nvme/nvme_ns_cmd.o 00:04:04.146 CC lib/nvme/nvme_ns.o 00:04:04.146 CC lib/nvme/nvme_pcie_common.o 00:04:04.146 CC lib/nvme/nvme_pcie.o 00:04:04.146 CC lib/nvme/nvme_qpair.o 00:04:04.146 CC lib/nvme/nvme.o 
00:04:04.716 CC lib/nvme/nvme_quirks.o 00:04:04.716 CC lib/nvme/nvme_transport.o 00:04:04.716 CC lib/nvme/nvme_discovery.o 00:04:04.716 LIB libspdk_thread.a 00:04:04.716 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:04.716 SO libspdk_thread.so.11.0 00:04:04.716 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:04.975 SYMLINK libspdk_thread.so 00:04:04.975 CC lib/nvme/nvme_tcp.o 00:04:04.975 CC lib/nvme/nvme_opal.o 00:04:04.975 CC lib/nvme/nvme_io_msg.o 00:04:04.975 CC lib/nvme/nvme_poll_group.o 00:04:05.246 CC lib/nvme/nvme_zns.o 00:04:05.246 CC lib/nvme/nvme_stubs.o 00:04:05.246 CC lib/nvme/nvme_auth.o 00:04:05.246 CC lib/nvme/nvme_cuse.o 00:04:05.541 CC lib/nvme/nvme_rdma.o 00:04:05.541 CC lib/accel/accel.o 00:04:05.541 CC lib/blob/blobstore.o 00:04:05.541 CC lib/blob/request.o 00:04:05.541 CC lib/blob/zeroes.o 00:04:05.801 CC lib/accel/accel_rpc.o 00:04:05.801 CC lib/accel/accel_sw.o 00:04:05.801 CC lib/blob/blob_bs_dev.o 00:04:06.066 CC lib/init/json_config.o 00:04:06.066 CC lib/init/subsystem.o 00:04:06.066 CC lib/init/subsystem_rpc.o 00:04:06.325 CC lib/init/rpc.o 00:04:06.325 CC lib/virtio/virtio.o 00:04:06.325 CC lib/virtio/virtio_vhost_user.o 00:04:06.325 CC lib/virtio/virtio_vfio_user.o 00:04:06.325 CC lib/virtio/virtio_pci.o 00:04:06.325 CC lib/fsdev/fsdev.o 00:04:06.325 LIB libspdk_init.a 00:04:06.585 SO libspdk_init.so.6.0 00:04:06.585 CC lib/fsdev/fsdev_io.o 00:04:06.585 SYMLINK libspdk_init.so 00:04:06.585 CC lib/fsdev/fsdev_rpc.o 00:04:06.585 LIB libspdk_virtio.a 00:04:06.844 LIB libspdk_accel.a 00:04:06.844 SO libspdk_virtio.so.7.0 00:04:06.844 LIB libspdk_nvme.a 00:04:06.844 CC lib/event/app.o 00:04:06.844 CC lib/event/reactor.o 00:04:06.844 CC lib/event/log_rpc.o 00:04:06.844 CC lib/event/app_rpc.o 00:04:06.844 SO libspdk_accel.so.16.0 00:04:06.844 SYMLINK libspdk_virtio.so 00:04:06.844 CC lib/event/scheduler_static.o 00:04:06.844 SYMLINK libspdk_accel.so 00:04:07.103 LIB libspdk_fsdev.a 00:04:07.103 SO libspdk_nvme.so.15.0 00:04:07.103 SO libspdk_fsdev.so.2.0 00:04:07.103 CC lib/bdev/bdev.o 00:04:07.103 CC lib/bdev/bdev_rpc.o 00:04:07.103 CC lib/bdev/part.o 00:04:07.103 CC lib/bdev/bdev_zone.o 00:04:07.103 CC lib/bdev/scsi_nvme.o 00:04:07.103 SYMLINK libspdk_fsdev.so 00:04:07.362 LIB libspdk_event.a 00:04:07.362 SYMLINK libspdk_nvme.so 00:04:07.362 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:07.362 SO libspdk_event.so.14.0 00:04:07.362 SYMLINK libspdk_event.so 00:04:07.928 LIB libspdk_fuse_dispatcher.a 00:04:08.187 SO libspdk_fuse_dispatcher.so.1.0 00:04:08.187 SYMLINK libspdk_fuse_dispatcher.so 00:04:09.124 LIB libspdk_blob.a 00:04:09.124 SO libspdk_blob.so.11.0 00:04:09.383 SYMLINK libspdk_blob.so 00:04:09.643 CC lib/blobfs/blobfs.o 00:04:09.643 CC lib/blobfs/tree.o 00:04:09.643 CC lib/lvol/lvol.o 00:04:10.210 LIB libspdk_bdev.a 00:04:10.210 SO libspdk_bdev.so.17.0 00:04:10.469 SYMLINK libspdk_bdev.so 00:04:10.469 LIB libspdk_blobfs.a 00:04:10.728 SO libspdk_blobfs.so.10.0 00:04:10.728 CC lib/nbd/nbd.o 00:04:10.728 CC lib/nbd/nbd_rpc.o 00:04:10.728 CC lib/scsi/lun.o 00:04:10.729 CC lib/scsi/dev.o 00:04:10.729 CC lib/scsi/port.o 00:04:10.729 CC lib/ublk/ublk.o 00:04:10.729 CC lib/nvmf/ctrlr.o 00:04:10.729 CC lib/ftl/ftl_core.o 00:04:10.729 SYMLINK libspdk_blobfs.so 00:04:10.729 CC lib/ftl/ftl_init.o 00:04:10.729 LIB libspdk_lvol.a 00:04:10.729 SO libspdk_lvol.so.10.0 00:04:10.729 SYMLINK libspdk_lvol.so 00:04:10.729 CC lib/ftl/ftl_layout.o 00:04:10.729 CC lib/ftl/ftl_debug.o 00:04:10.987 CC lib/ftl/ftl_io.o 00:04:10.987 CC lib/ftl/ftl_sb.o 00:04:10.987 CC lib/scsi/scsi.o 
00:04:10.987 CC lib/ublk/ublk_rpc.o 00:04:10.987 CC lib/ftl/ftl_l2p.o 00:04:10.987 CC lib/scsi/scsi_bdev.o 00:04:10.987 CC lib/ftl/ftl_l2p_flat.o 00:04:10.987 CC lib/scsi/scsi_pr.o 00:04:11.247 LIB libspdk_nbd.a 00:04:11.247 CC lib/ftl/ftl_nv_cache.o 00:04:11.247 SO libspdk_nbd.so.7.0 00:04:11.247 CC lib/ftl/ftl_band.o 00:04:11.247 CC lib/ftl/ftl_band_ops.o 00:04:11.247 SYMLINK libspdk_nbd.so 00:04:11.247 CC lib/ftl/ftl_writer.o 00:04:11.247 CC lib/nvmf/ctrlr_discovery.o 00:04:11.247 CC lib/nvmf/ctrlr_bdev.o 00:04:11.247 LIB libspdk_ublk.a 00:04:11.506 SO libspdk_ublk.so.3.0 00:04:11.506 SYMLINK libspdk_ublk.so 00:04:11.506 CC lib/scsi/scsi_rpc.o 00:04:11.506 CC lib/scsi/task.o 00:04:11.506 CC lib/ftl/ftl_rq.o 00:04:11.506 CC lib/ftl/ftl_reloc.o 00:04:11.506 CC lib/ftl/ftl_l2p_cache.o 00:04:11.506 CC lib/ftl/ftl_p2l.o 00:04:11.506 CC lib/ftl/ftl_p2l_log.o 00:04:11.765 CC lib/ftl/mngt/ftl_mngt.o 00:04:11.765 LIB libspdk_scsi.a 00:04:11.765 SO libspdk_scsi.so.9.0 00:04:11.765 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:11.765 SYMLINK libspdk_scsi.so 00:04:11.765 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:11.765 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:12.024 CC lib/nvmf/subsystem.o 00:04:12.025 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:12.025 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:12.025 CC lib/nvmf/nvmf.o 00:04:12.284 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:12.284 CC lib/iscsi/conn.o 00:04:12.284 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:12.284 CC lib/vhost/vhost.o 00:04:12.284 CC lib/iscsi/init_grp.o 00:04:12.284 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:12.284 CC lib/vhost/vhost_rpc.o 00:04:12.284 CC lib/vhost/vhost_scsi.o 00:04:12.543 CC lib/vhost/vhost_blk.o 00:04:12.543 CC lib/vhost/rte_vhost_user.o 00:04:12.543 CC lib/nvmf/nvmf_rpc.o 00:04:12.802 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:12.802 CC lib/iscsi/iscsi.o 00:04:13.061 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:13.061 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:13.061 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:13.061 CC lib/nvmf/transport.o 00:04:13.061 CC lib/nvmf/tcp.o 00:04:13.320 CC lib/nvmf/stubs.o 00:04:13.320 CC lib/ftl/utils/ftl_conf.o 00:04:13.320 CC lib/nvmf/mdns_server.o 00:04:13.320 CC lib/nvmf/rdma.o 00:04:13.320 CC lib/nvmf/auth.o 00:04:13.579 CC lib/ftl/utils/ftl_md.o 00:04:13.579 CC lib/ftl/utils/ftl_mempool.o 00:04:13.579 CC lib/ftl/utils/ftl_bitmap.o 00:04:13.579 LIB libspdk_vhost.a 00:04:13.579 CC lib/ftl/utils/ftl_property.o 00:04:13.838 SO libspdk_vhost.so.8.0 00:04:13.838 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:13.839 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:13.839 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:13.839 SYMLINK libspdk_vhost.so 00:04:13.839 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:14.097 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:14.097 CC lib/iscsi/param.o 00:04:14.097 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:14.097 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:14.097 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:14.097 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:14.097 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:14.377 CC lib/iscsi/portal_grp.o 00:04:14.377 CC lib/iscsi/tgt_node.o 00:04:14.377 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:14.377 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:14.377 CC lib/iscsi/iscsi_subsystem.o 00:04:14.377 CC lib/iscsi/iscsi_rpc.o 00:04:14.377 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:14.638 CC lib/iscsi/task.o 00:04:14.639 CC lib/ftl/base/ftl_base_dev.o 00:04:14.639 CC lib/ftl/base/ftl_base_bdev.o 00:04:14.639 CC lib/ftl/ftl_trace.o 00:04:14.901 LIB libspdk_ftl.a 00:04:14.901 LIB libspdk_iscsi.a 
00:04:14.901 SO libspdk_iscsi.so.8.0 00:04:15.160 SO libspdk_ftl.so.9.0 00:04:15.160 SYMLINK libspdk_iscsi.so 00:04:15.420 SYMLINK libspdk_ftl.so 00:04:15.988 LIB libspdk_nvmf.a 00:04:15.988 SO libspdk_nvmf.so.20.0 00:04:16.247 SYMLINK libspdk_nvmf.so 00:04:16.815 CC module/env_dpdk/env_dpdk_rpc.o 00:04:16.815 CC module/accel/error/accel_error.o 00:04:16.815 CC module/blob/bdev/blob_bdev.o 00:04:16.815 CC module/accel/dsa/accel_dsa.o 00:04:16.815 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:16.815 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:16.815 CC module/accel/ioat/accel_ioat.o 00:04:16.815 CC module/keyring/file/keyring.o 00:04:16.815 CC module/fsdev/aio/fsdev_aio.o 00:04:16.815 CC module/sock/posix/posix.o 00:04:16.815 LIB libspdk_env_dpdk_rpc.a 00:04:16.815 SO libspdk_env_dpdk_rpc.so.6.0 00:04:17.074 SYMLINK libspdk_env_dpdk_rpc.so 00:04:17.074 CC module/keyring/file/keyring_rpc.o 00:04:17.074 CC module/accel/dsa/accel_dsa_rpc.o 00:04:17.074 LIB libspdk_scheduler_dpdk_governor.a 00:04:17.074 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:17.074 CC module/accel/error/accel_error_rpc.o 00:04:17.074 LIB libspdk_scheduler_dynamic.a 00:04:17.074 CC module/accel/ioat/accel_ioat_rpc.o 00:04:17.074 SO libspdk_scheduler_dynamic.so.4.0 00:04:17.074 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:17.074 LIB libspdk_blob_bdev.a 00:04:17.074 LIB libspdk_keyring_file.a 00:04:17.074 LIB libspdk_accel_dsa.a 00:04:17.074 SYMLINK libspdk_scheduler_dynamic.so 00:04:17.074 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:17.074 SO libspdk_blob_bdev.so.11.0 00:04:17.074 SO libspdk_keyring_file.so.2.0 00:04:17.074 SO libspdk_accel_dsa.so.5.0 00:04:17.074 LIB libspdk_accel_ioat.a 00:04:17.074 LIB libspdk_accel_error.a 00:04:17.074 SO libspdk_accel_ioat.so.6.0 00:04:17.074 SYMLINK libspdk_keyring_file.so 00:04:17.074 SO libspdk_accel_error.so.2.0 00:04:17.074 SYMLINK libspdk_blob_bdev.so 00:04:17.332 SYMLINK libspdk_accel_dsa.so 00:04:17.332 CC module/fsdev/aio/linux_aio_mgr.o 00:04:17.332 CC module/scheduler/gscheduler/gscheduler.o 00:04:17.332 SYMLINK libspdk_accel_ioat.so 00:04:17.332 SYMLINK libspdk_accel_error.so 00:04:17.332 CC module/accel/iaa/accel_iaa.o 00:04:17.332 CC module/accel/iaa/accel_iaa_rpc.o 00:04:17.332 CC module/keyring/linux/keyring.o 00:04:17.332 LIB libspdk_scheduler_gscheduler.a 00:04:17.332 CC module/keyring/linux/keyring_rpc.o 00:04:17.332 SO libspdk_scheduler_gscheduler.so.4.0 00:04:17.590 LIB libspdk_accel_iaa.a 00:04:17.590 CC module/bdev/error/vbdev_error.o 00:04:17.590 SYMLINK libspdk_scheduler_gscheduler.so 00:04:17.590 CC module/bdev/delay/vbdev_delay.o 00:04:17.590 CC module/bdev/error/vbdev_error_rpc.o 00:04:17.590 CC module/blobfs/bdev/blobfs_bdev.o 00:04:17.590 SO libspdk_accel_iaa.so.3.0 00:04:17.590 LIB libspdk_fsdev_aio.a 00:04:17.590 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:17.590 LIB libspdk_keyring_linux.a 00:04:17.590 SO libspdk_fsdev_aio.so.1.0 00:04:17.590 CC module/bdev/gpt/gpt.o 00:04:17.590 SO libspdk_keyring_linux.so.1.0 00:04:17.590 SYMLINK libspdk_accel_iaa.so 00:04:17.590 LIB libspdk_sock_posix.a 00:04:17.590 SYMLINK libspdk_fsdev_aio.so 00:04:17.590 SYMLINK libspdk_keyring_linux.so 00:04:17.590 CC module/bdev/gpt/vbdev_gpt.o 00:04:17.590 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:17.590 SO libspdk_sock_posix.so.6.0 00:04:17.850 LIB libspdk_blobfs_bdev.a 00:04:17.850 SYMLINK libspdk_sock_posix.so 00:04:17.850 SO libspdk_blobfs_bdev.so.6.0 00:04:17.850 LIB libspdk_bdev_error.a 00:04:17.850 SYMLINK libspdk_blobfs_bdev.so 00:04:17.850 
SO libspdk_bdev_error.so.6.0 00:04:17.850 CC module/bdev/lvol/vbdev_lvol.o 00:04:17.850 LIB libspdk_bdev_delay.a 00:04:17.850 CC module/bdev/malloc/bdev_malloc.o 00:04:17.850 SYMLINK libspdk_bdev_error.so 00:04:17.850 CC module/bdev/null/bdev_null.o 00:04:17.850 SO libspdk_bdev_delay.so.6.0 00:04:17.850 CC module/bdev/nvme/bdev_nvme.o 00:04:17.850 LIB libspdk_bdev_gpt.a 00:04:18.109 CC module/bdev/passthru/vbdev_passthru.o 00:04:18.109 SO libspdk_bdev_gpt.so.6.0 00:04:18.109 SYMLINK libspdk_bdev_delay.so 00:04:18.109 CC module/bdev/null/bdev_null_rpc.o 00:04:18.109 CC module/bdev/raid/bdev_raid.o 00:04:18.109 CC module/bdev/split/vbdev_split.o 00:04:18.109 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:18.109 SYMLINK libspdk_bdev_gpt.so 00:04:18.109 CC module/bdev/raid/bdev_raid_rpc.o 00:04:18.109 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:18.109 LIB libspdk_bdev_null.a 00:04:18.368 SO libspdk_bdev_null.so.6.0 00:04:18.368 CC module/bdev/split/vbdev_split_rpc.o 00:04:18.368 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:18.368 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:18.368 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:18.368 SYMLINK libspdk_bdev_null.so 00:04:18.368 LIB libspdk_bdev_split.a 00:04:18.368 LIB libspdk_bdev_passthru.a 00:04:18.368 LIB libspdk_bdev_malloc.a 00:04:18.368 SO libspdk_bdev_split.so.6.0 00:04:18.368 LIB libspdk_bdev_zone_block.a 00:04:18.368 SO libspdk_bdev_passthru.so.6.0 00:04:18.368 SO libspdk_bdev_malloc.so.6.0 00:04:18.627 CC module/bdev/xnvme/bdev_xnvme.o 00:04:18.627 SO libspdk_bdev_zone_block.so.6.0 00:04:18.627 SYMLINK libspdk_bdev_split.so 00:04:18.627 CC module/bdev/raid/bdev_raid_sb.o 00:04:18.627 SYMLINK libspdk_bdev_malloc.so 00:04:18.627 SYMLINK libspdk_bdev_passthru.so 00:04:18.627 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:18.627 CC module/bdev/aio/bdev_aio.o 00:04:18.627 CC module/bdev/nvme/nvme_rpc.o 00:04:18.627 SYMLINK libspdk_bdev_zone_block.so 00:04:18.627 CC module/bdev/nvme/bdev_mdns_client.o 00:04:18.627 CC module/bdev/ftl/bdev_ftl.o 00:04:18.627 LIB libspdk_bdev_lvol.a 00:04:18.627 SO libspdk_bdev_lvol.so.6.0 00:04:18.627 SYMLINK libspdk_bdev_lvol.so 00:04:18.886 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:18.886 CC module/bdev/nvme/vbdev_opal.o 00:04:18.886 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:18.886 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:18.886 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:18.886 CC module/bdev/iscsi/bdev_iscsi.o 00:04:18.886 CC module/bdev/aio/bdev_aio_rpc.o 00:04:18.886 LIB libspdk_bdev_xnvme.a 00:04:18.886 SO libspdk_bdev_xnvme.so.3.0 00:04:19.145 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:19.145 SYMLINK libspdk_bdev_xnvme.so 00:04:19.145 CC module/bdev/raid/raid0.o 00:04:19.145 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:19.145 LIB libspdk_bdev_aio.a 00:04:19.145 LIB libspdk_bdev_ftl.a 00:04:19.145 SO libspdk_bdev_aio.so.6.0 00:04:19.145 SO libspdk_bdev_ftl.so.6.0 00:04:19.145 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:19.145 SYMLINK libspdk_bdev_aio.so 00:04:19.145 CC module/bdev/raid/raid1.o 00:04:19.145 SYMLINK libspdk_bdev_ftl.so 00:04:19.145 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:19.145 CC module/bdev/raid/concat.o 00:04:19.404 LIB libspdk_bdev_iscsi.a 00:04:19.404 SO libspdk_bdev_iscsi.so.6.0 00:04:19.404 LIB libspdk_bdev_virtio.a 00:04:19.404 LIB libspdk_bdev_raid.a 00:04:19.404 SYMLINK libspdk_bdev_iscsi.so 00:04:19.404 SO libspdk_bdev_virtio.so.6.0 00:04:19.663 SO libspdk_bdev_raid.so.6.0 00:04:19.663 SYMLINK libspdk_bdev_virtio.so 00:04:19.663 SYMLINK 
libspdk_bdev_raid.so 00:04:21.040 LIB libspdk_bdev_nvme.a 00:04:21.040 SO libspdk_bdev_nvme.so.7.1 00:04:21.040 SYMLINK libspdk_bdev_nvme.so 00:04:21.609 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:21.609 CC module/event/subsystems/iobuf/iobuf.o 00:04:21.609 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:21.609 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:21.609 CC module/event/subsystems/vmd/vmd.o 00:04:21.609 CC module/event/subsystems/keyring/keyring.o 00:04:21.609 CC module/event/subsystems/sock/sock.o 00:04:21.609 CC module/event/subsystems/fsdev/fsdev.o 00:04:21.609 CC module/event/subsystems/scheduler/scheduler.o 00:04:21.868 LIB libspdk_event_keyring.a 00:04:21.868 LIB libspdk_event_fsdev.a 00:04:21.868 LIB libspdk_event_vmd.a 00:04:21.868 LIB libspdk_event_sock.a 00:04:21.868 LIB libspdk_event_scheduler.a 00:04:21.868 LIB libspdk_event_vhost_blk.a 00:04:21.868 LIB libspdk_event_iobuf.a 00:04:21.868 SO libspdk_event_keyring.so.1.0 00:04:21.868 SO libspdk_event_fsdev.so.1.0 00:04:21.868 SO libspdk_event_sock.so.5.0 00:04:21.868 SO libspdk_event_vmd.so.6.0 00:04:21.868 SO libspdk_event_scheduler.so.4.0 00:04:21.868 SO libspdk_event_vhost_blk.so.3.0 00:04:21.868 SO libspdk_event_iobuf.so.3.0 00:04:21.868 SYMLINK libspdk_event_keyring.so 00:04:21.868 SYMLINK libspdk_event_fsdev.so 00:04:21.868 SYMLINK libspdk_event_sock.so 00:04:21.868 SYMLINK libspdk_event_scheduler.so 00:04:21.868 SYMLINK libspdk_event_vmd.so 00:04:21.868 SYMLINK libspdk_event_vhost_blk.so 00:04:21.868 SYMLINK libspdk_event_iobuf.so 00:04:22.436 CC module/event/subsystems/accel/accel.o 00:04:22.436 LIB libspdk_event_accel.a 00:04:22.436 SO libspdk_event_accel.so.6.0 00:04:22.724 SYMLINK libspdk_event_accel.so 00:04:23.007 CC module/event/subsystems/bdev/bdev.o 00:04:23.266 LIB libspdk_event_bdev.a 00:04:23.266 SO libspdk_event_bdev.so.6.0 00:04:23.266 SYMLINK libspdk_event_bdev.so 00:04:23.835 CC module/event/subsystems/ublk/ublk.o 00:04:23.835 CC module/event/subsystems/nbd/nbd.o 00:04:23.835 CC module/event/subsystems/scsi/scsi.o 00:04:23.835 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:23.835 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:23.835 LIB libspdk_event_ublk.a 00:04:23.835 LIB libspdk_event_nbd.a 00:04:23.835 SO libspdk_event_ublk.so.3.0 00:04:23.835 SO libspdk_event_nbd.so.6.0 00:04:23.835 LIB libspdk_event_scsi.a 00:04:23.835 SYMLINK libspdk_event_ublk.so 00:04:24.094 SO libspdk_event_scsi.so.6.0 00:04:24.094 SYMLINK libspdk_event_nbd.so 00:04:24.094 LIB libspdk_event_nvmf.a 00:04:24.094 SO libspdk_event_nvmf.so.6.0 00:04:24.094 SYMLINK libspdk_event_scsi.so 00:04:24.094 SYMLINK libspdk_event_nvmf.so 00:04:24.352 CC module/event/subsystems/iscsi/iscsi.o 00:04:24.352 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:24.611 LIB libspdk_event_iscsi.a 00:04:24.611 LIB libspdk_event_vhost_scsi.a 00:04:24.611 SO libspdk_event_iscsi.so.6.0 00:04:24.611 SO libspdk_event_vhost_scsi.so.3.0 00:04:24.611 SYMLINK libspdk_event_iscsi.so 00:04:24.872 SYMLINK libspdk_event_vhost_scsi.so 00:04:24.872 SO libspdk.so.6.0 00:04:24.872 SYMLINK libspdk.so 00:04:25.439 CXX app/trace/trace.o 00:04:25.440 CC app/trace_record/trace_record.o 00:04:25.440 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:25.440 CC app/iscsi_tgt/iscsi_tgt.o 00:04:25.440 CC app/nvmf_tgt/nvmf_main.o 00:04:25.440 CC app/spdk_tgt/spdk_tgt.o 00:04:25.440 CC test/thread/poller_perf/poller_perf.o 00:04:25.440 CC examples/util/zipf/zipf.o 00:04:25.440 CC examples/ioat/perf/perf.o 00:04:25.440 CC test/dma/test_dma/test_dma.o 
00:04:25.440 LINK nvmf_tgt 00:04:25.440 LINK interrupt_tgt 00:04:25.440 LINK iscsi_tgt 00:04:25.440 LINK spdk_tgt 00:04:25.440 LINK poller_perf 00:04:25.440 LINK zipf 00:04:25.440 LINK spdk_trace_record 00:04:25.698 LINK ioat_perf 00:04:25.698 LINK spdk_trace 00:04:25.698 CC examples/ioat/verify/verify.o 00:04:25.698 CC app/spdk_lspci/spdk_lspci.o 00:04:25.698 CC app/spdk_nvme_perf/perf.o 00:04:25.698 CC app/spdk_nvme_identify/identify.o 00:04:25.957 CC app/spdk_nvme_discover/discovery_aer.o 00:04:25.957 CC app/spdk_top/spdk_top.o 00:04:25.957 CC test/app/bdev_svc/bdev_svc.o 00:04:25.957 LINK test_dma 00:04:25.957 CC examples/thread/thread/thread_ex.o 00:04:25.957 LINK spdk_lspci 00:04:25.957 LINK verify 00:04:25.957 CC examples/sock/hello_world/hello_sock.o 00:04:25.957 LINK spdk_nvme_discover 00:04:25.957 LINK bdev_svc 00:04:26.216 LINK thread 00:04:26.216 CC app/spdk_dd/spdk_dd.o 00:04:26.216 LINK hello_sock 00:04:26.216 CC examples/vmd/lsvmd/lsvmd.o 00:04:26.216 CC examples/idxd/perf/perf.o 00:04:26.475 CC app/fio/nvme/fio_plugin.o 00:04:26.475 LINK lsvmd 00:04:26.475 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:26.475 CC app/fio/bdev/fio_plugin.o 00:04:26.735 CC app/vhost/vhost.o 00:04:26.735 LINK spdk_dd 00:04:26.735 LINK idxd_perf 00:04:26.735 CC examples/vmd/led/led.o 00:04:26.735 LINK spdk_nvme_perf 00:04:26.735 LINK spdk_nvme_identify 00:04:26.735 LINK spdk_top 00:04:26.735 LINK vhost 00:04:26.735 LINK led 00:04:26.994 CC test/app/histogram_perf/histogram_perf.o 00:04:26.994 CC test/app/jsoncat/jsoncat.o 00:04:26.994 LINK nvme_fuzz 00:04:26.994 CC examples/accel/perf/accel_perf.o 00:04:26.994 LINK histogram_perf 00:04:26.994 LINK spdk_nvme 00:04:26.994 CC test/app/stub/stub.o 00:04:26.994 LINK spdk_bdev 00:04:26.994 CC examples/blob/hello_world/hello_blob.o 00:04:26.994 LINK jsoncat 00:04:27.253 CC examples/nvme/hello_world/hello_world.o 00:04:27.253 CC examples/blob/cli/blobcli.o 00:04:27.253 TEST_HEADER include/spdk/accel.h 00:04:27.253 LINK stub 00:04:27.253 TEST_HEADER include/spdk/accel_module.h 00:04:27.253 TEST_HEADER include/spdk/assert.h 00:04:27.253 TEST_HEADER include/spdk/barrier.h 00:04:27.253 TEST_HEADER include/spdk/base64.h 00:04:27.253 TEST_HEADER include/spdk/bdev.h 00:04:27.253 TEST_HEADER include/spdk/bdev_module.h 00:04:27.253 TEST_HEADER include/spdk/bdev_zone.h 00:04:27.253 TEST_HEADER include/spdk/bit_array.h 00:04:27.253 TEST_HEADER include/spdk/bit_pool.h 00:04:27.253 TEST_HEADER include/spdk/blob_bdev.h 00:04:27.253 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:27.253 TEST_HEADER include/spdk/blobfs.h 00:04:27.253 TEST_HEADER include/spdk/blob.h 00:04:27.253 TEST_HEADER include/spdk/conf.h 00:04:27.253 TEST_HEADER include/spdk/config.h 00:04:27.253 TEST_HEADER include/spdk/cpuset.h 00:04:27.253 TEST_HEADER include/spdk/crc16.h 00:04:27.253 TEST_HEADER include/spdk/crc32.h 00:04:27.253 TEST_HEADER include/spdk/crc64.h 00:04:27.253 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:27.253 TEST_HEADER include/spdk/dif.h 00:04:27.253 TEST_HEADER include/spdk/dma.h 00:04:27.253 TEST_HEADER include/spdk/endian.h 00:04:27.253 TEST_HEADER include/spdk/env_dpdk.h 00:04:27.253 TEST_HEADER include/spdk/env.h 00:04:27.253 TEST_HEADER include/spdk/event.h 00:04:27.253 TEST_HEADER include/spdk/fd_group.h 00:04:27.253 TEST_HEADER include/spdk/fd.h 00:04:27.253 TEST_HEADER include/spdk/file.h 00:04:27.253 TEST_HEADER include/spdk/fsdev.h 00:04:27.253 TEST_HEADER include/spdk/fsdev_module.h 00:04:27.253 TEST_HEADER include/spdk/ftl.h 00:04:27.253 TEST_HEADER 
include/spdk/fuse_dispatcher.h 00:04:27.253 TEST_HEADER include/spdk/gpt_spec.h 00:04:27.253 TEST_HEADER include/spdk/hexlify.h 00:04:27.253 LINK hello_blob 00:04:27.253 TEST_HEADER include/spdk/histogram_data.h 00:04:27.253 TEST_HEADER include/spdk/idxd.h 00:04:27.253 TEST_HEADER include/spdk/idxd_spec.h 00:04:27.253 TEST_HEADER include/spdk/init.h 00:04:27.253 TEST_HEADER include/spdk/ioat.h 00:04:27.253 TEST_HEADER include/spdk/ioat_spec.h 00:04:27.253 TEST_HEADER include/spdk/iscsi_spec.h 00:04:27.253 TEST_HEADER include/spdk/json.h 00:04:27.253 TEST_HEADER include/spdk/jsonrpc.h 00:04:27.253 TEST_HEADER include/spdk/keyring.h 00:04:27.253 TEST_HEADER include/spdk/keyring_module.h 00:04:27.253 TEST_HEADER include/spdk/likely.h 00:04:27.253 TEST_HEADER include/spdk/log.h 00:04:27.253 TEST_HEADER include/spdk/lvol.h 00:04:27.253 CC examples/nvme/reconnect/reconnect.o 00:04:27.253 TEST_HEADER include/spdk/md5.h 00:04:27.253 TEST_HEADER include/spdk/memory.h 00:04:27.253 TEST_HEADER include/spdk/mmio.h 00:04:27.253 TEST_HEADER include/spdk/nbd.h 00:04:27.253 TEST_HEADER include/spdk/net.h 00:04:27.253 TEST_HEADER include/spdk/notify.h 00:04:27.253 TEST_HEADER include/spdk/nvme.h 00:04:27.253 TEST_HEADER include/spdk/nvme_intel.h 00:04:27.253 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:27.253 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:27.253 TEST_HEADER include/spdk/nvme_spec.h 00:04:27.253 TEST_HEADER include/spdk/nvme_zns.h 00:04:27.253 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:27.253 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:27.510 TEST_HEADER include/spdk/nvmf.h 00:04:27.510 TEST_HEADER include/spdk/nvmf_spec.h 00:04:27.510 TEST_HEADER include/spdk/nvmf_transport.h 00:04:27.510 TEST_HEADER include/spdk/opal.h 00:04:27.510 TEST_HEADER include/spdk/opal_spec.h 00:04:27.510 TEST_HEADER include/spdk/pci_ids.h 00:04:27.510 TEST_HEADER include/spdk/pipe.h 00:04:27.510 TEST_HEADER include/spdk/queue.h 00:04:27.510 TEST_HEADER include/spdk/reduce.h 00:04:27.510 TEST_HEADER include/spdk/rpc.h 00:04:27.510 TEST_HEADER include/spdk/scheduler.h 00:04:27.510 TEST_HEADER include/spdk/scsi.h 00:04:27.510 TEST_HEADER include/spdk/scsi_spec.h 00:04:27.510 TEST_HEADER include/spdk/sock.h 00:04:27.510 TEST_HEADER include/spdk/stdinc.h 00:04:27.510 TEST_HEADER include/spdk/string.h 00:04:27.510 TEST_HEADER include/spdk/thread.h 00:04:27.510 TEST_HEADER include/spdk/trace.h 00:04:27.510 TEST_HEADER include/spdk/trace_parser.h 00:04:27.510 LINK hello_world 00:04:27.510 TEST_HEADER include/spdk/tree.h 00:04:27.510 TEST_HEADER include/spdk/ublk.h 00:04:27.510 TEST_HEADER include/spdk/util.h 00:04:27.510 TEST_HEADER include/spdk/uuid.h 00:04:27.510 TEST_HEADER include/spdk/version.h 00:04:27.510 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:27.510 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:27.510 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:27.510 TEST_HEADER include/spdk/vhost.h 00:04:27.510 TEST_HEADER include/spdk/vmd.h 00:04:27.510 TEST_HEADER include/spdk/xor.h 00:04:27.510 TEST_HEADER include/spdk/zipf.h 00:04:27.510 CXX test/cpp_headers/accel.o 00:04:27.510 CC test/env/mem_callbacks/mem_callbacks.o 00:04:27.510 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:27.510 CXX test/cpp_headers/accel_module.o 00:04:27.510 LINK accel_perf 00:04:27.769 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:27.769 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:27.769 LINK hello_fsdev 00:04:27.769 CC examples/nvme/arbitration/arbitration.o 00:04:27.769 LINK reconnect 00:04:27.769 CXX 
test/cpp_headers/assert.o 00:04:27.769 LINK blobcli 00:04:27.769 CC examples/nvme/hotplug/hotplug.o 00:04:28.028 CXX test/cpp_headers/barrier.o 00:04:28.028 LINK mem_callbacks 00:04:28.028 CXX test/cpp_headers/base64.o 00:04:28.028 LINK hotplug 00:04:28.028 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:28.028 LINK arbitration 00:04:28.028 CC examples/nvme/abort/abort.o 00:04:28.028 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:28.028 LINK vhost_fuzz 00:04:28.287 CXX test/cpp_headers/bdev.o 00:04:28.287 CC test/env/vtophys/vtophys.o 00:04:28.287 LINK nvme_manage 00:04:28.287 CXX test/cpp_headers/bdev_module.o 00:04:28.287 CXX test/cpp_headers/bdev_zone.o 00:04:28.287 LINK cmb_copy 00:04:28.287 LINK pmr_persistence 00:04:28.287 LINK vtophys 00:04:28.287 CXX test/cpp_headers/bit_array.o 00:04:28.546 CC examples/bdev/hello_world/hello_bdev.o 00:04:28.546 CXX test/cpp_headers/bit_pool.o 00:04:28.546 CXX test/cpp_headers/blob_bdev.o 00:04:28.546 LINK abort 00:04:28.546 CC examples/bdev/bdevperf/bdevperf.o 00:04:28.546 CXX test/cpp_headers/blobfs_bdev.o 00:04:28.546 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:28.546 CC test/event/event_perf/event_perf.o 00:04:28.546 CC test/event/reactor/reactor.o 00:04:28.806 LINK hello_bdev 00:04:28.806 LINK env_dpdk_post_init 00:04:28.806 CC test/event/reactor_perf/reactor_perf.o 00:04:28.806 CXX test/cpp_headers/blobfs.o 00:04:28.806 CC test/event/app_repeat/app_repeat.o 00:04:28.806 LINK event_perf 00:04:28.806 CC test/event/scheduler/scheduler.o 00:04:28.806 LINK reactor 00:04:28.806 LINK reactor_perf 00:04:28.806 CXX test/cpp_headers/blob.o 00:04:28.806 LINK app_repeat 00:04:29.066 CXX test/cpp_headers/conf.o 00:04:29.066 CXX test/cpp_headers/config.o 00:04:29.066 CC test/env/memory/memory_ut.o 00:04:29.066 LINK scheduler 00:04:29.066 CC test/rpc_client/rpc_client_test.o 00:04:29.066 CC test/nvme/aer/aer.o 00:04:29.066 CXX test/cpp_headers/cpuset.o 00:04:29.066 LINK iscsi_fuzz 00:04:29.066 CC test/env/pci/pci_ut.o 00:04:29.325 CC test/accel/dif/dif.o 00:04:29.325 LINK rpc_client_test 00:04:29.325 CXX test/cpp_headers/crc16.o 00:04:29.325 CC test/blobfs/mkfs/mkfs.o 00:04:29.325 CXX test/cpp_headers/crc32.o 00:04:29.325 CC test/nvme/reset/reset.o 00:04:29.325 LINK aer 00:04:29.325 CXX test/cpp_headers/crc64.o 00:04:29.325 LINK bdevperf 00:04:29.325 LINK mkfs 00:04:29.584 CC test/nvme/sgl/sgl.o 00:04:29.584 CXX test/cpp_headers/dif.o 00:04:29.584 CC test/nvme/e2edp/nvme_dp.o 00:04:29.584 LINK pci_ut 00:04:29.584 LINK reset 00:04:29.584 CC test/nvme/overhead/overhead.o 00:04:29.584 CXX test/cpp_headers/dma.o 00:04:29.843 CC examples/nvmf/nvmf/nvmf.o 00:04:29.843 LINK sgl 00:04:29.843 CC test/nvme/err_injection/err_injection.o 00:04:29.843 LINK nvme_dp 00:04:29.843 CXX test/cpp_headers/endian.o 00:04:29.843 CC test/lvol/esnap/esnap.o 00:04:29.843 CC test/nvme/startup/startup.o 00:04:29.843 LINK overhead 00:04:29.843 LINK dif 00:04:30.102 CXX test/cpp_headers/env_dpdk.o 00:04:30.102 CC test/nvme/reserve/reserve.o 00:04:30.102 LINK err_injection 00:04:30.102 LINK startup 00:04:30.102 CC test/nvme/simple_copy/simple_copy.o 00:04:30.102 LINK nvmf 00:04:30.102 LINK memory_ut 00:04:30.102 CC test/nvme/connect_stress/connect_stress.o 00:04:30.102 CXX test/cpp_headers/env.o 00:04:30.102 CC test/nvme/boot_partition/boot_partition.o 00:04:30.362 LINK reserve 00:04:30.362 CC test/nvme/compliance/nvme_compliance.o 00:04:30.362 CC test/nvme/fused_ordering/fused_ordering.o 00:04:30.362 LINK simple_copy 00:04:30.362 CXX test/cpp_headers/event.o 
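The CXX test/cpp_headers/*.o lines interleaved through this stretch are SPDK's header self-sufficiency check: every public header under include/spdk is compiled on its own, so a header that silently relies on something another header pulled in fails here rather than in an application build. A sketch of the idea, assuming g++ and the repository's include/ layout:

    # Compile each public header in isolation; a missing transitive
    # include shows up as a compile error on that header's .o.
    mkdir -p test/cpp_headers
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        printf '#include <spdk/%s.h>\n' "$name" > "test/cpp_headers/$name.cpp"
        g++ -std=c++11 -Iinclude -c "test/cpp_headers/$name.cpp" \
            -o "test/cpp_headers/$name.o"
    done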
00:04:30.362 LINK connect_stress 00:04:30.362 LINK boot_partition 00:04:30.362 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:30.362 CC test/nvme/fdp/fdp.o 00:04:30.621 CC test/nvme/cuse/cuse.o 00:04:30.621 CXX test/cpp_headers/fd_group.o 00:04:30.621 CXX test/cpp_headers/fd.o 00:04:30.621 LINK fused_ordering 00:04:30.621 CXX test/cpp_headers/file.o 00:04:30.621 LINK doorbell_aers 00:04:30.621 LINK nvme_compliance 00:04:30.621 CXX test/cpp_headers/fsdev.o 00:04:30.621 CXX test/cpp_headers/fsdev_module.o 00:04:30.621 CXX test/cpp_headers/ftl.o 00:04:30.621 CXX test/cpp_headers/fuse_dispatcher.o 00:04:30.621 CC test/bdev/bdevio/bdevio.o 00:04:30.621 CXX test/cpp_headers/gpt_spec.o 00:04:30.621 LINK fdp 00:04:30.905 CXX test/cpp_headers/hexlify.o 00:04:30.905 CXX test/cpp_headers/histogram_data.o 00:04:30.905 CXX test/cpp_headers/idxd.o 00:04:30.905 CXX test/cpp_headers/idxd_spec.o 00:04:30.905 CXX test/cpp_headers/init.o 00:04:30.905 CXX test/cpp_headers/ioat.o 00:04:30.905 CXX test/cpp_headers/ioat_spec.o 00:04:30.905 CXX test/cpp_headers/iscsi_spec.o 00:04:30.905 CXX test/cpp_headers/json.o 00:04:30.905 CXX test/cpp_headers/jsonrpc.o 00:04:31.178 CXX test/cpp_headers/keyring.o 00:04:31.178 CXX test/cpp_headers/keyring_module.o 00:04:31.178 CXX test/cpp_headers/likely.o 00:04:31.178 CXX test/cpp_headers/log.o 00:04:31.178 LINK bdevio 00:04:31.178 CXX test/cpp_headers/lvol.o 00:04:31.178 CXX test/cpp_headers/md5.o 00:04:31.178 CXX test/cpp_headers/memory.o 00:04:31.178 CXX test/cpp_headers/mmio.o 00:04:31.178 CXX test/cpp_headers/nbd.o 00:04:31.178 CXX test/cpp_headers/net.o 00:04:31.178 CXX test/cpp_headers/notify.o 00:04:31.178 CXX test/cpp_headers/nvme.o 00:04:31.178 CXX test/cpp_headers/nvme_intel.o 00:04:31.178 CXX test/cpp_headers/nvme_ocssd.o 00:04:31.437 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:31.437 CXX test/cpp_headers/nvme_spec.o 00:04:31.437 CXX test/cpp_headers/nvme_zns.o 00:04:31.437 CXX test/cpp_headers/nvmf_cmd.o 00:04:31.437 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:31.437 CXX test/cpp_headers/nvmf.o 00:04:31.437 CXX test/cpp_headers/nvmf_spec.o 00:04:31.437 CXX test/cpp_headers/nvmf_transport.o 00:04:31.437 CXX test/cpp_headers/opal.o 00:04:31.437 CXX test/cpp_headers/opal_spec.o 00:04:31.696 CXX test/cpp_headers/pci_ids.o 00:04:31.696 CXX test/cpp_headers/pipe.o 00:04:31.696 CXX test/cpp_headers/queue.o 00:04:31.696 CXX test/cpp_headers/reduce.o 00:04:31.696 CXX test/cpp_headers/rpc.o 00:04:31.696 CXX test/cpp_headers/scheduler.o 00:04:31.696 CXX test/cpp_headers/scsi_spec.o 00:04:31.696 CXX test/cpp_headers/scsi.o 00:04:31.696 CXX test/cpp_headers/sock.o 00:04:31.696 CXX test/cpp_headers/stdinc.o 00:04:31.696 CXX test/cpp_headers/string.o 00:04:31.696 LINK cuse 00:04:31.956 CXX test/cpp_headers/thread.o 00:04:31.956 CXX test/cpp_headers/trace.o 00:04:31.956 CXX test/cpp_headers/trace_parser.o 00:04:31.956 CXX test/cpp_headers/tree.o 00:04:31.956 CXX test/cpp_headers/ublk.o 00:04:31.956 CXX test/cpp_headers/util.o 00:04:31.956 CXX test/cpp_headers/uuid.o 00:04:31.956 CXX test/cpp_headers/version.o 00:04:31.956 CXX test/cpp_headers/vfio_user_pci.o 00:04:31.956 CXX test/cpp_headers/vfio_user_spec.o 00:04:31.956 CXX test/cpp_headers/vhost.o 00:04:31.956 CXX test/cpp_headers/vmd.o 00:04:31.956 CXX test/cpp_headers/xor.o 00:04:31.956 CXX test/cpp_headers/zipf.o 00:04:36.149 LINK esnap 00:04:36.149 00:04:36.149 real 1m25.786s 00:04:36.149 user 7m9.019s 00:04:36.149 sys 1m54.328s 00:04:36.149 ************************************ 00:04:36.149 END TEST make 00:04:36.149 
************************************ 00:04:36.149 11:08:13 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:36.149 11:08:13 make -- common/autotest_common.sh@10 -- $ set +x 00:04:36.149 11:08:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:36.149 11:08:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:36.149 11:08:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:36.149 11:08:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.149 11:08:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:36.149 11:08:13 -- pm/common@44 -- $ pid=5287 00:04:36.149 11:08:13 -- pm/common@50 -- $ kill -TERM 5287 00:04:36.149 11:08:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.149 11:08:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:36.409 11:08:13 -- pm/common@44 -- $ pid=5289 00:04:36.409 11:08:13 -- pm/common@50 -- $ kill -TERM 5289 00:04:36.409 11:08:13 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:36.409 11:08:13 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:36.409 11:08:13 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:36.409 11:08:13 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:36.409 11:08:13 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:36.409 11:08:13 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:36.409 11:08:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.409 11:08:13 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.409 11:08:13 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.409 11:08:13 -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.409 11:08:13 -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.409 11:08:13 -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.409 11:08:13 -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.409 11:08:13 -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.409 11:08:13 -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.409 11:08:13 -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.409 11:08:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.409 11:08:13 -- scripts/common.sh@344 -- # case "$op" in 00:04:36.409 11:08:13 -- scripts/common.sh@345 -- # : 1 00:04:36.409 11:08:13 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.409 11:08:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.409 11:08:13 -- scripts/common.sh@365 -- # decimal 1 00:04:36.409 11:08:13 -- scripts/common.sh@353 -- # local d=1 00:04:36.409 11:08:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.409 11:08:13 -- scripts/common.sh@355 -- # echo 1 00:04:36.409 11:08:13 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.409 11:08:13 -- scripts/common.sh@366 -- # decimal 2 00:04:36.409 11:08:13 -- scripts/common.sh@353 -- # local d=2 00:04:36.409 11:08:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.409 11:08:13 -- scripts/common.sh@355 -- # echo 2 00:04:36.409 11:08:13 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.409 11:08:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.409 11:08:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.409 11:08:13 -- scripts/common.sh@368 -- # return 0 00:04:36.409 11:08:13 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.409 11:08:13 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:36.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.409 --rc genhtml_branch_coverage=1 00:04:36.409 --rc genhtml_function_coverage=1 00:04:36.409 --rc genhtml_legend=1 00:04:36.409 --rc geninfo_all_blocks=1 00:04:36.409 --rc geninfo_unexecuted_blocks=1 00:04:36.409 00:04:36.409 ' 00:04:36.409 11:08:13 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:36.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.409 --rc genhtml_branch_coverage=1 00:04:36.409 --rc genhtml_function_coverage=1 00:04:36.409 --rc genhtml_legend=1 00:04:36.409 --rc geninfo_all_blocks=1 00:04:36.409 --rc geninfo_unexecuted_blocks=1 00:04:36.409 00:04:36.409 ' 00:04:36.409 11:08:13 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:36.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.409 --rc genhtml_branch_coverage=1 00:04:36.409 --rc genhtml_function_coverage=1 00:04:36.409 --rc genhtml_legend=1 00:04:36.409 --rc geninfo_all_blocks=1 00:04:36.409 --rc geninfo_unexecuted_blocks=1 00:04:36.409 00:04:36.409 ' 00:04:36.409 11:08:13 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:36.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.409 --rc genhtml_branch_coverage=1 00:04:36.409 --rc genhtml_function_coverage=1 00:04:36.409 --rc genhtml_legend=1 00:04:36.409 --rc geninfo_all_blocks=1 00:04:36.409 --rc geninfo_unexecuted_blocks=1 00:04:36.409 00:04:36.409 ' 00:04:36.409 11:08:13 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:36.409 11:08:13 -- nvmf/common.sh@7 -- # uname -s 00:04:36.409 11:08:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.409 11:08:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.409 11:08:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.409 11:08:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.409 11:08:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.409 11:08:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.409 11:08:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.409 11:08:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.409 11:08:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.409 11:08:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.409 11:08:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ebaabc51-4779-460f-bf0c-937daf1be927 00:04:36.409 
11:08:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=ebaabc51-4779-460f-bf0c-937daf1be927 00:04:36.409 11:08:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.409 11:08:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.409 11:08:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.409 11:08:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.409 11:08:13 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:36.668 11:08:13 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.668 11:08:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.668 11:08:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.668 11:08:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.668 11:08:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.668 11:08:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.668 11:08:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.668 11:08:13 -- paths/export.sh@5 -- # export PATH 00:04:36.668 11:08:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.669 11:08:13 -- nvmf/common.sh@51 -- # : 0 00:04:36.669 11:08:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.669 11:08:13 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.669 11:08:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.669 11:08:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.669 11:08:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.669 11:08:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.669 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.669 11:08:13 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.669 11:08:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.669 11:08:13 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.669 11:08:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:36.669 11:08:13 -- spdk/autotest.sh@32 -- # uname -s 00:04:36.669 11:08:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:36.669 11:08:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:36.669 11:08:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:36.669 11:08:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:36.669 11:08:13 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:36.669 11:08:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:36.669 11:08:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:36.669 11:08:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:36.669 11:08:13 -- spdk/autotest.sh@48 -- # udevadm_pid=54779 00:04:36.669 11:08:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:36.669 11:08:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:36.669 11:08:13 -- pm/common@17 -- # local monitor 00:04:36.669 11:08:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.669 11:08:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:36.669 11:08:13 -- pm/common@25 -- # sleep 1 00:04:36.669 11:08:13 -- pm/common@21 -- # date +%s 00:04:36.669 11:08:13 -- pm/common@21 -- # date +%s 00:04:36.669 11:08:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731668893 00:04:36.669 11:08:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731668893 00:04:36.669 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731668893_collect-cpu-load.pm.log 00:04:36.669 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731668893_collect-vmstat.pm.log 00:04:37.606 11:08:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:37.606 11:08:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:37.606 11:08:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:37.606 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.606 11:08:14 -- spdk/autotest.sh@59 -- # create_test_list 00:04:37.606 11:08:14 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:37.606 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.606 11:08:14 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:37.606 11:08:14 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:37.606 11:08:14 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:37.606 11:08:14 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:37.606 11:08:14 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:37.606 11:08:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:37.606 11:08:14 -- common/autotest_common.sh@1455 -- # uname 00:04:37.606 11:08:14 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:37.606 11:08:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:37.606 11:08:14 -- common/autotest_common.sh@1475 -- # uname 00:04:37.606 11:08:14 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:37.606 11:08:14 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:37.606 11:08:15 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:37.864 lcov: LCOV version 1.15 00:04:37.864 11:08:15 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:52.806 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:52.806 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:07.684 11:08:44 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:07.684 11:08:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:07.684 11:08:44 -- common/autotest_common.sh@10 -- # set +x 00:05:07.684 11:08:44 -- spdk/autotest.sh@78 -- # rm -f 00:05:07.684 11:08:44 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.250 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.186 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:09.186 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:09.186 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:09.186 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:09.186 11:08:46 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:09.186 11:08:46 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:09.186 11:08:46 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:09.186 11:08:46 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:09.186 11:08:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:09.186 11:08:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:09.186 11:08:46 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:09.186 11:08:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.186 11:08:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:09.186 11:08:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:09.186 11:08:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:09.186 11:08:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:09.186 11:08:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:09.186 11:08:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:09.186 11:08:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:09.186 11:08:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2c2n1 00:05:09.186 11:08:46 -- common/autotest_common.sh@1648 -- # local device=nvme2c2n1 00:05:09.186 11:08:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:05:09.186 11:08:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:09.186 11:08:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:09.186 11:08:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:09.186 11:08:46 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:09.186 11:08:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:09.186 11:08:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:09.186 11:08:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:09.186 11:08:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:09.186 11:08:46 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:09.186 11:08:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:09.186 
11:08:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:09.186 11:08:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:09.186 11:08:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n2 00:05:09.186 11:08:46 -- common/autotest_common.sh@1648 -- # local device=nvme3n2 00:05:09.187 11:08:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:05:09.187 11:08:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:09.187 11:08:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:09.187 11:08:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n3 00:05:09.187 11:08:46 -- common/autotest_common.sh@1648 -- # local device=nvme3n3 00:05:09.187 11:08:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:05:09.187 11:08:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:09.187 11:08:46 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:09.187 11:08:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.187 11:08:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.187 11:08:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:09.187 11:08:46 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:09.187 11:08:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:09.187 No valid GPT data, bailing 00:05:09.187 11:08:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:09.187 11:08:46 -- scripts/common.sh@394 -- # pt= 00:05:09.187 11:08:46 -- scripts/common.sh@395 -- # return 1 00:05:09.187 11:08:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:09.187 1+0 records in 00:05:09.187 1+0 records out 00:05:09.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00624772 s, 168 MB/s 00:05:09.187 11:08:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.187 11:08:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.187 11:08:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:09.187 11:08:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:09.187 11:08:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:09.187 No valid GPT data, bailing 00:05:09.187 11:08:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:09.187 11:08:46 -- scripts/common.sh@394 -- # pt= 00:05:09.187 11:08:46 -- scripts/common.sh@395 -- # return 1 00:05:09.187 11:08:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:09.187 1+0 records in 00:05:09.187 1+0 records out 00:05:09.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151747 s, 69.1 MB/s 00:05:09.187 11:08:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.187 11:08:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.187 11:08:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:09.187 11:08:46 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:09.187 11:08:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:09.445 No valid GPT data, bailing 00:05:09.445 11:08:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:09.445 11:08:46 -- scripts/common.sh@394 -- # pt= 00:05:09.445 11:08:46 -- scripts/common.sh@395 -- # return 1 00:05:09.445 11:08:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:09.445 1+0 
records in 00:05:09.445 1+0 records out 00:05:09.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00587896 s, 178 MB/s 00:05:09.445 11:08:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.445 11:08:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.445 11:08:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:09.445 11:08:46 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:09.445 11:08:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:09.445 No valid GPT data, bailing 00:05:09.445 11:08:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:09.446 11:08:46 -- scripts/common.sh@394 -- # pt= 00:05:09.446 11:08:46 -- scripts/common.sh@395 -- # return 1 00:05:09.446 11:08:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:09.446 1+0 records in 00:05:09.446 1+0 records out 00:05:09.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00589237 s, 178 MB/s 00:05:09.446 11:08:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.446 11:08:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.446 11:08:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:05:09.446 11:08:46 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:05:09.446 11:08:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:05:09.446 No valid GPT data, bailing 00:05:09.446 11:08:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:05:09.446 11:08:46 -- scripts/common.sh@394 -- # pt= 00:05:09.446 11:08:46 -- scripts/common.sh@395 -- # return 1 00:05:09.446 11:08:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:05:09.446 1+0 records in 00:05:09.446 1+0 records out 00:05:09.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00362939 s, 289 MB/s 00:05:09.446 11:08:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.446 11:08:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.446 11:08:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:05:09.446 11:08:46 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:05:09.446 11:08:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:05:09.704 No valid GPT data, bailing 00:05:09.704 11:08:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:05:09.704 11:08:46 -- scripts/common.sh@394 -- # pt= 00:05:09.704 11:08:46 -- scripts/common.sh@395 -- # return 1 00:05:09.704 11:08:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:05:09.704 1+0 records in 00:05:09.704 1+0 records out 00:05:09.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534989 s, 196 MB/s 00:05:09.704 11:08:46 -- spdk/autotest.sh@105 -- # sync 00:05:09.704 11:08:46 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:09.704 11:08:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:09.704 11:08:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:12.990 11:08:49 -- spdk/autotest.sh@111 -- # uname -s 00:05:12.990 11:08:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:12.990 11:08:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:12.990 11:08:49 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:13.557 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.816 
Hugepages 00:05:13.816 node hugesize free / total 00:05:13.816 node0 1048576kB 0 / 0 00:05:13.816 node0 2048kB 0 / 0 00:05:13.816 00:05:13.816 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:14.074 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:14.074 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:14.333 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:14.333 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:05:14.592 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:14.592 11:08:51 -- spdk/autotest.sh@117 -- # uname -s 00:05:14.592 11:08:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:14.592 11:08:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:14.592 11:08:51 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.159 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.095 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.095 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.095 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.095 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.095 11:08:53 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:17.473 11:08:54 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:17.473 11:08:54 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:17.473 11:08:54 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:17.473 11:08:54 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:17.473 11:08:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:17.473 11:08:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:17.473 11:08:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:17.473 11:08:54 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:17.473 11:08:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:17.473 11:08:54 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:17.473 11:08:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:17.473 11:08:54 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.299 Waiting for block devices as requested 00:05:18.299 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:18.300 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:18.558 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:18.558 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:23.827 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:23.827 11:09:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:23.827 11:09:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:23.827 11:09:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.827 11:09:00 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:23.827 11:09:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:23.827 11:09:00 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:23.827 11:09:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:23.827 11:09:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:23.827 11:09:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:23.827 11:09:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:23.827 11:09:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:23.827 11:09:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:23.827 11:09:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:23.827 11:09:00 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:23.827 11:09:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:23.827 11:09:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:23.827 11:09:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:23.827 11:09:00 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:23.827 11:09:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:23.827 11:09:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:23.828 11:09:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:23.828 11:09:00 -- common/autotest_common.sh@1541 -- # continue 00:05:23.828 11:09:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:23.828 11:09:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:23.828 11:09:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.828 11:09:00 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:23.828 11:09:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:23.828 11:09:00 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:23.828 11:09:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:23.828 11:09:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:23.828 11:09:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:23.828 11:09:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:23.828 11:09:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:23.828 11:09:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:23.828 11:09:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:23.828 11:09:01 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:23.828 11:09:01 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:23.828 11:09:01 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:23.828 11:09:01 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:23.828 11:09:01 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:23.828 11:09:01 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:23.828 11:09:01 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:23.828 11:09:01 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:23.828 11:09:01 -- common/autotest_common.sh@1541 -- # continue 00:05:23.828 11:09:01 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:23.828 11:09:01 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:23.828 11:09:01 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.828 11:09:01 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:23.828 11:09:01 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:23.828 11:09:01 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:23.828 11:09:01 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:23.828 11:09:01 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:23.828 11:09:01 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:23.828 11:09:01 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:23.828 11:09:01 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:23.828 11:09:01 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:23.828 11:09:01 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:23.828 11:09:01 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:23.828 11:09:01 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:23.828 11:09:01 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:23.828 11:09:01 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:23.828 11:09:01 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:23.828 11:09:01 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:23.828 11:09:01 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:23.828 11:09:01 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:23.828 11:09:01 -- common/autotest_common.sh@1541 -- # continue 00:05:23.828 11:09:01 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:23.828 11:09:01 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:23.828 11:09:01 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.828 11:09:01 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:23.828 11:09:01 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:23.828 11:09:01 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:23.828 11:09:01 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:23.828 11:09:01 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:23.828 11:09:01 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:23.828 11:09:01 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:23.828 11:09:01 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:23.828 11:09:01 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:23.828 11:09:01 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:23.828 11:09:01 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:23.828 11:09:01 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:23.828 11:09:01 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:23.828 11:09:01 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:23.828 11:09:01 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:23.828 11:09:01 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:23.828 11:09:01 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:23.828 11:09:01 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
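Each id-ctrl block above runs the same per-controller probe: read the OACS field from nvme id-ctrl, test bit 3 (namespace management), then read unvmcap and, when the unallocated capacity is zero, continue to the next controller. Condensed into a standalone sketch using the same grep/cut parsing the xtrace shows (the device list matches this run):

    # Probe every controller the way autotest_common.sh does: OACS bit 3
    # set plus unvmcap == 0 means namespace management is supported and
    # no capacity is left unallocated, so the loop moves on.
    for ctrlr in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)     # ' 0x12a'
        if (( (oacs & 0x8) != 0 )); then                            # 0x12a & 0x8 = 8
            unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
            (( unvmcap == 0 )) && continue
        fi
        echo "$ctrlr has unallocated capacity or lacks NS management"
    done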
00:05:23.828 11:09:01 -- common/autotest_common.sh@1541 -- # continue 00:05:23.828 11:09:01 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:23.828 11:09:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:23.828 11:09:01 -- common/autotest_common.sh@10 -- # set +x 00:05:23.828 11:09:01 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:23.828 11:09:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.828 11:09:01 -- common/autotest_common.sh@10 -- # set +x 00:05:23.828 11:09:01 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.348 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.348 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.348 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.606 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.606 11:09:02 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:25.606 11:09:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:25.606 11:09:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.606 11:09:02 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:25.606 11:09:02 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:25.606 11:09:02 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:25.606 11:09:02 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:25.606 11:09:02 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:25.606 11:09:02 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:25.606 11:09:02 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:25.606 11:09:02 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:25.606 11:09:02 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:25.606 11:09:02 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:25.606 11:09:02 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.606 11:09:02 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:25.606 11:09:02 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:25.865 11:09:03 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:25.865 11:09:03 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:25.865 11:09:03 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:25.865 11:09:03 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:25.865 11:09:03 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:25.865 11:09:03 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:25.865 11:09:03 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:25.865 11:09:03 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:25.865 11:09:03 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:25.865 11:09:03 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:25.865 11:09:03 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:25.865 11:09:03 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:25.865 11:09:03 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:25.865 11:09:03 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
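The opal_revert_cleanup step above first enumerates controllers by piping scripts/gen_nvme.sh (which emits an SPDK JSON config) through jq to collect PCI addresses, then walks each BDF comparing its sysfs device ID against 0x0a54; the per-device checks continue below, and all four QEMU controllers report 0x0010, so no revert is performed. A condensed sketch of that filter:

    # Collect controller BDFs from the generated SPDK config, then keep
    # only those whose PCI device ID matches the one the cleanup targets.
    bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 0; }
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0010 here
        [[ $device == 0x0a54 ]] && echo "$bdf matches 0x0a54, revert needed"
    done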
00:05:25.865 11:09:03 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:25.865 11:09:03 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:25.865 11:09:03 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:25.865 11:09:03 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:25.865 11:09:03 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:25.865 11:09:03 -- common/autotest_common.sh@1570 -- # return 0 00:05:25.865 11:09:03 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:25.865 11:09:03 -- common/autotest_common.sh@1578 -- # return 0 00:05:25.865 11:09:03 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:25.865 11:09:03 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:25.865 11:09:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:25.865 11:09:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:25.865 11:09:03 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:25.865 11:09:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:25.865 11:09:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.865 11:09:03 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:25.865 11:09:03 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:25.865 11:09:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.865 11:09:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.865 11:09:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.865 ************************************ 00:05:25.865 START TEST env 00:05:25.865 ************************************ 00:05:25.865 11:09:03 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:26.123 * Looking for test storage... 00:05:26.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:26.123 11:09:03 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:26.123 11:09:03 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:26.123 11:09:03 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:26.123 11:09:03 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:26.123 11:09:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.123 11:09:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.123 11:09:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.123 11:09:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.123 11:09:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.123 11:09:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.123 11:09:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.123 11:09:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.123 11:09:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.123 11:09:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.123 11:09:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.123 11:09:03 env -- scripts/common.sh@344 -- # case "$op" in 00:05:26.123 11:09:03 env -- scripts/common.sh@345 -- # : 1 00:05:26.123 11:09:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.123 11:09:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.123 11:09:03 env -- scripts/common.sh@365 -- # decimal 1 00:05:26.123 11:09:03 env -- scripts/common.sh@353 -- # local d=1 00:05:26.123 11:09:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.123 11:09:03 env -- scripts/common.sh@355 -- # echo 1 00:05:26.123 11:09:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.123 11:09:03 env -- scripts/common.sh@366 -- # decimal 2 00:05:26.123 11:09:03 env -- scripts/common.sh@353 -- # local d=2 00:05:26.123 11:09:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.123 11:09:03 env -- scripts/common.sh@355 -- # echo 2 00:05:26.123 11:09:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.123 11:09:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.123 11:09:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.123 11:09:03 env -- scripts/common.sh@368 -- # return 0 00:05:26.123 11:09:03 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.123 11:09:03 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:26.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.123 --rc genhtml_branch_coverage=1 00:05:26.123 --rc genhtml_function_coverage=1 00:05:26.123 --rc genhtml_legend=1 00:05:26.123 --rc geninfo_all_blocks=1 00:05:26.123 --rc geninfo_unexecuted_blocks=1 00:05:26.123 00:05:26.123 ' 00:05:26.123 11:09:03 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:26.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.123 --rc genhtml_branch_coverage=1 00:05:26.123 --rc genhtml_function_coverage=1 00:05:26.123 --rc genhtml_legend=1 00:05:26.123 --rc geninfo_all_blocks=1 00:05:26.123 --rc geninfo_unexecuted_blocks=1 00:05:26.123 00:05:26.123 ' 00:05:26.123 11:09:03 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:26.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.123 --rc genhtml_branch_coverage=1 00:05:26.123 --rc genhtml_function_coverage=1 00:05:26.123 --rc genhtml_legend=1 00:05:26.123 --rc geninfo_all_blocks=1 00:05:26.123 --rc geninfo_unexecuted_blocks=1 00:05:26.123 00:05:26.123 ' 00:05:26.123 11:09:03 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:26.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.123 --rc genhtml_branch_coverage=1 00:05:26.123 --rc genhtml_function_coverage=1 00:05:26.123 --rc genhtml_legend=1 00:05:26.123 --rc geninfo_all_blocks=1 00:05:26.123 --rc geninfo_unexecuted_blocks=1 00:05:26.123 00:05:26.123 ' 00:05:26.123 11:09:03 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:26.123 11:09:03 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:26.123 11:09:03 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.123 11:09:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.123 ************************************ 00:05:26.123 START TEST env_memory 00:05:26.123 ************************************ 00:05:26.123 11:09:03 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:26.123 00:05:26.123 00:05:26.123 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.123 http://cunit.sourceforge.net/ 00:05:26.123 00:05:26.123 00:05:26.123 Suite: memory 00:05:26.123 Test: alloc and free memory map ...[2024-11-15 11:09:03.489578] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:26.382 passed 00:05:26.382 Test: mem map translation ...[2024-11-15 11:09:03.534342] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:26.382 [2024-11-15 11:09:03.534493] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:26.382 [2024-11-15 11:09:03.534689] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:26.382 [2024-11-15 11:09:03.534764] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:26.382 passed 00:05:26.382 Test: mem map registration ...[2024-11-15 11:09:03.603064] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:26.382 [2024-11-15 11:09:03.603214] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:26.382 passed 00:05:26.382 Test: mem map adjacent registrations ...passed 00:05:26.382 00:05:26.382 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.382 suites 1 1 n/a 0 0 00:05:26.382 tests 4 4 4 0 0 00:05:26.382 asserts 152 152 152 0 n/a 00:05:26.382 00:05:26.382 Elapsed time = 0.247 seconds 00:05:26.382 ************************************ 00:05:26.382 END TEST env_memory 00:05:26.382 ************************************ 00:05:26.382 00:05:26.382 real 0m0.301s 00:05:26.382 user 0m0.263s 00:05:26.382 sys 0m0.026s 00:05:26.382 11:09:03 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.382 11:09:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:26.382 11:09:03 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:26.382 11:09:03 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:26.382 11:09:03 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.382 11:09:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.382 ************************************ 00:05:26.382 START TEST env_vtophys 00:05:26.382 ************************************ 00:05:26.382 11:09:03 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:26.641 EAL: lib.eal log level changed from notice to debug 00:05:26.641 EAL: Detected lcore 0 as core 0 on socket 0 00:05:26.641 EAL: Detected lcore 1 as core 0 on socket 0 00:05:26.641 EAL: Detected lcore 2 as core 0 on socket 0 00:05:26.641 EAL: Detected lcore 3 as core 0 on socket 0 00:05:26.641 EAL: Detected lcore 4 as core 0 on socket 0 00:05:26.641 EAL: Detected lcore 5 as core 0 on socket 0 00:05:26.641 EAL: Detected lcore 6 as core 0 on socket 0 00:05:26.641 EAL: Detected lcore 7 as core 0 on socket 0 00:05:26.641 EAL: Detected lcore 8 as core 0 on socket 0 00:05:26.641 EAL: Detected lcore 9 as core 0 on socket 0 00:05:26.641 EAL: Maximum logical cores by configuration: 128 00:05:26.641 EAL: Detected CPU lcores: 10 00:05:26.641 EAL: Detected NUMA nodes: 1 00:05:26.641 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:26.641 EAL: Detected shared linkage of DPDK 00:05:26.641 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:26.641 EAL: Selected IOVA mode 'PA' 00:05:26.641 EAL: Probing VFIO support... 00:05:26.641 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:26.641 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:26.641 EAL: Ask a virtual area of 0x2e000 bytes 00:05:26.641 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:26.641 EAL: Setting up physically contiguous memory... 00:05:26.641 EAL: Setting maximum number of open files to 524288 00:05:26.641 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:26.641 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:26.641 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.641 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:26.641 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.641 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.641 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:26.641 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:26.641 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.641 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:26.641 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.641 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.642 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:26.642 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:26.642 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.642 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:26.642 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.642 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.642 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:26.642 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:26.642 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.642 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:26.642 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.642 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.642 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:26.642 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:26.642 EAL: Hugepages will be freed exactly as allocated. 00:05:26.642 EAL: No shared files mode enabled, IPC is disabled 00:05:26.642 EAL: No shared files mode enabled, IPC is disabled 00:05:26.642 EAL: TSC frequency is ~2490000 KHz 00:05:26.642 EAL: Main lcore 0 is ready (tid=7f6ca4ef2a40;cpuset=[0]) 00:05:26.642 EAL: Trying to obtain current memory policy. 00:05:26.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.642 EAL: Restoring previous memory policy: 0 00:05:26.642 EAL: request: mp_malloc_sync 00:05:26.642 EAL: No shared files mode enabled, IPC is disabled 00:05:26.642 EAL: Heap on socket 0 was expanded by 2MB 00:05:26.642 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:26.642 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:26.642 EAL: Mem event callback 'spdk:(nil)' registered 00:05:26.642 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:26.642 00:05:26.642 00:05:26.642 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.642 http://cunit.sourceforge.net/ 00:05:26.642 00:05:26.642 00:05:26.642 Suite: components_suite 00:05:27.209 Test: vtophys_malloc_test ...passed 00:05:27.209 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:27.209 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.209 EAL: Restoring previous memory policy: 4 00:05:27.209 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.209 EAL: request: mp_malloc_sync 00:05:27.209 EAL: No shared files mode enabled, IPC is disabled 00:05:27.209 EAL: Heap on socket 0 was expanded by 4MB 00:05:27.209 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.209 EAL: request: mp_malloc_sync 00:05:27.209 EAL: No shared files mode enabled, IPC is disabled 00:05:27.209 EAL: Heap on socket 0 was shrunk by 4MB 00:05:27.469 EAL: Trying to obtain current memory policy. 00:05:27.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.469 EAL: Restoring previous memory policy: 4 00:05:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.469 EAL: request: mp_malloc_sync 00:05:27.469 EAL: No shared files mode enabled, IPC is disabled 00:05:27.469 EAL: Heap on socket 0 was expanded by 6MB 00:05:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.469 EAL: request: mp_malloc_sync 00:05:27.469 EAL: No shared files mode enabled, IPC is disabled 00:05:27.469 EAL: Heap on socket 0 was shrunk by 6MB 00:05:27.469 EAL: Trying to obtain current memory policy. 00:05:27.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.469 EAL: Restoring previous memory policy: 4 00:05:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.469 EAL: request: mp_malloc_sync 00:05:27.469 EAL: No shared files mode enabled, IPC is disabled 00:05:27.469 EAL: Heap on socket 0 was expanded by 10MB 00:05:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.469 EAL: request: mp_malloc_sync 00:05:27.469 EAL: No shared files mode enabled, IPC is disabled 00:05:27.469 EAL: Heap on socket 0 was shrunk by 10MB 00:05:27.469 EAL: Trying to obtain current memory policy. 00:05:27.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.469 EAL: Restoring previous memory policy: 4 00:05:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.469 EAL: request: mp_malloc_sync 00:05:27.469 EAL: No shared files mode enabled, IPC is disabled 00:05:27.469 EAL: Heap on socket 0 was expanded by 18MB 00:05:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.469 EAL: request: mp_malloc_sync 00:05:27.469 EAL: No shared files mode enabled, IPC is disabled 00:05:27.469 EAL: Heap on socket 0 was shrunk by 18MB 00:05:27.469 EAL: Trying to obtain current memory policy. 00:05:27.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.469 EAL: Restoring previous memory policy: 4 00:05:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.469 EAL: request: mp_malloc_sync 00:05:27.469 EAL: No shared files mode enabled, IPC is disabled 00:05:27.469 EAL: Heap on socket 0 was expanded by 34MB 00:05:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.469 EAL: request: mp_malloc_sync 00:05:27.469 EAL: No shared files mode enabled, IPC is disabled 00:05:27.469 EAL: Heap on socket 0 was shrunk by 34MB 00:05:27.469 EAL: Trying to obtain current memory policy. 
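[Editor's note] The expand/shrink pairs above (and the larger ones that follow, up to 1026MB) are vtophys_spdk_malloc_test allocating DMA-safe buffers of roughly doubling size: each allocation fires the 'spdk:(nil)' mem event callback and grows the DPDK heap, and the matching free shrinks it again. The suite can be re-run standalone; the binary path below is taken from this log, while the setup.sh hugepage step and the HUGEMEM value are assumptions about the VM:

    # --- illustrative sketch, not part of the captured log ---
    sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh   # reserve hugepages (value assumed)
    sudo /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys        # path as shown in this log
    # --- end sketch ---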
00:05:27.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.728 EAL: Restoring previous memory policy: 4 00:05:27.728 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.728 EAL: request: mp_malloc_sync 00:05:27.728 EAL: No shared files mode enabled, IPC is disabled 00:05:27.728 EAL: Heap on socket 0 was expanded by 66MB 00:05:27.728 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.728 EAL: request: mp_malloc_sync 00:05:27.728 EAL: No shared files mode enabled, IPC is disabled 00:05:27.728 EAL: Heap on socket 0 was shrunk by 66MB 00:05:27.986 EAL: Trying to obtain current memory policy. 00:05:27.986 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.986 EAL: Restoring previous memory policy: 4 00:05:27.986 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.986 EAL: request: mp_malloc_sync 00:05:27.986 EAL: No shared files mode enabled, IPC is disabled 00:05:27.986 EAL: Heap on socket 0 was expanded by 130MB 00:05:28.244 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.244 EAL: request: mp_malloc_sync 00:05:28.244 EAL: No shared files mode enabled, IPC is disabled 00:05:28.244 EAL: Heap on socket 0 was shrunk by 130MB 00:05:28.503 EAL: Trying to obtain current memory policy. 00:05:28.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.503 EAL: Restoring previous memory policy: 4 00:05:28.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.503 EAL: request: mp_malloc_sync 00:05:28.503 EAL: No shared files mode enabled, IPC is disabled 00:05:28.503 EAL: Heap on socket 0 was expanded by 258MB 00:05:29.070 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.070 EAL: request: mp_malloc_sync 00:05:29.070 EAL: No shared files mode enabled, IPC is disabled 00:05:29.070 EAL: Heap on socket 0 was shrunk by 258MB 00:05:29.638 EAL: Trying to obtain current memory policy. 00:05:29.638 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.638 EAL: Restoring previous memory policy: 4 00:05:29.638 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.638 EAL: request: mp_malloc_sync 00:05:29.638 EAL: No shared files mode enabled, IPC is disabled 00:05:29.638 EAL: Heap on socket 0 was expanded by 514MB 00:05:30.574 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.832 EAL: request: mp_malloc_sync 00:05:30.832 EAL: No shared files mode enabled, IPC is disabled 00:05:30.832 EAL: Heap on socket 0 was shrunk by 514MB 00:05:31.767 EAL: Trying to obtain current memory policy. 
00:05:31.767 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.025 EAL: Restoring previous memory policy: 4 00:05:32.025 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.025 EAL: request: mp_malloc_sync 00:05:32.025 EAL: No shared files mode enabled, IPC is disabled 00:05:32.025 EAL: Heap on socket 0 was expanded by 1026MB 00:05:33.926 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.185 EAL: request: mp_malloc_sync 00:05:34.185 EAL: No shared files mode enabled, IPC is disabled 00:05:34.185 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:36.088 passed 00:05:36.088 00:05:36.088 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.088 suites 1 1 n/a 0 0 00:05:36.088 tests 2 2 2 0 0 00:05:36.088 asserts 5775 5775 5775 0 n/a 00:05:36.088 00:05:36.088 Elapsed time = 9.048 seconds 00:05:36.088 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.088 EAL: request: mp_malloc_sync 00:05:36.088 EAL: No shared files mode enabled, IPC is disabled 00:05:36.088 EAL: Heap on socket 0 was shrunk by 2MB 00:05:36.088 EAL: No shared files mode enabled, IPC is disabled 00:05:36.088 EAL: No shared files mode enabled, IPC is disabled 00:05:36.088 EAL: No shared files mode enabled, IPC is disabled 00:05:36.088 00:05:36.088 real 0m9.395s 00:05:36.088 user 0m7.922s 00:05:36.088 sys 0m1.305s 00:05:36.088 ************************************ 00:05:36.088 END TEST env_vtophys 00:05:36.088 ************************************ 00:05:36.088 11:09:13 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.088 11:09:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:36.088 11:09:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:36.088 11:09:13 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:36.088 11:09:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.088 11:09:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.088 ************************************ 00:05:36.088 START TEST env_pci 00:05:36.088 ************************************ 00:05:36.088 11:09:13 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:36.088 00:05:36.088 00:05:36.088 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.088 http://cunit.sourceforge.net/ 00:05:36.088 00:05:36.088 00:05:36.089 Suite: pci 00:05:36.089 Test: pci_hook ...[2024-11-15 11:09:13.287465] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57648 has claimed it 00:05:36.089 passed 00:05:36.089 00:05:36.089 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.089 suites 1 1 n/a 0 0 00:05:36.089 tests 1 1 1 0 0 00:05:36.089 asserts 25 25 25 0 n/a 00:05:36.089 00:05:36.089 Elapsed time = 0.010 secondsEAL: Cannot find device (10000:00:01.0) 00:05:36.089 EAL: Failed to attach device on primary process 00:05:36.089 00:05:36.089 00:05:36.089 real 0m0.115s 00:05:36.089 user 0m0.050s 00:05:36.089 sys 0m0.064s 00:05:36.089 ************************************ 00:05:36.089 END TEST env_pci 00:05:36.089 ************************************ 00:05:36.089 11:09:13 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.089 11:09:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:36.089 11:09:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:36.089 11:09:13 env -- env/env.sh@15 -- # uname 00:05:36.089 11:09:13 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:36.089 11:09:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:36.089 11:09:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.089 11:09:13 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:36.089 11:09:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.089 11:09:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.089 ************************************ 00:05:36.089 START TEST env_dpdk_post_init 00:05:36.089 ************************************ 00:05:36.089 11:09:13 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.347 EAL: Detected CPU lcores: 10 00:05:36.347 EAL: Detected NUMA nodes: 1 00:05:36.347 EAL: Detected shared linkage of DPDK 00:05:36.347 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.347 EAL: Selected IOVA mode 'PA' 00:05:36.347 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.347 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:36.347 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:36.347 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:36.347 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:36.347 Starting DPDK initialization... 00:05:36.347 Starting SPDK post initialization... 00:05:36.347 SPDK NVMe probe 00:05:36.347 Attaching to 0000:00:10.0 00:05:36.347 Attaching to 0000:00:11.0 00:05:36.347 Attaching to 0000:00:12.0 00:05:36.347 Attaching to 0000:00:13.0 00:05:36.347 Attached to 0000:00:10.0 00:05:36.347 Attached to 0000:00:11.0 00:05:36.347 Attached to 0000:00:13.0 00:05:36.347 Attached to 0000:00:12.0 00:05:36.347 Cleaning up... 
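[Editor's note] The probe sequence above finds four emulated NVMe controllers (1b36:0010 is QEMU's NVMe device) and binds the spdk_nvme driver to each before cleaning up. The binary accepts the same flags autotest passed; a hedged standalone invocation, with the path and flags copied from this log and only the sudo/hugepage prerequisites assumed:

    # --- illustrative sketch, not part of the captured log ---
    sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000
    # --- end sketch ---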
00:05:36.605 00:05:36.605 real 0m0.318s 00:05:36.605 user 0m0.103s 00:05:36.605 sys 0m0.118s 00:05:36.605 11:09:13 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.605 ************************************ 00:05:36.605 END TEST env_dpdk_post_init 00:05:36.605 ************************************ 00:05:36.605 11:09:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.605 11:09:13 env -- env/env.sh@26 -- # uname 00:05:36.605 11:09:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:36.605 11:09:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.605 11:09:13 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:36.605 11:09:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.605 11:09:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.605 ************************************ 00:05:36.605 START TEST env_mem_callbacks 00:05:36.605 ************************************ 00:05:36.605 11:09:13 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.605 EAL: Detected CPU lcores: 10 00:05:36.605 EAL: Detected NUMA nodes: 1 00:05:36.605 EAL: Detected shared linkage of DPDK 00:05:36.605 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.605 EAL: Selected IOVA mode 'PA' 00:05:36.863 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.863 00:05:36.863 00:05:36.863 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.863 http://cunit.sourceforge.net/ 00:05:36.863 00:05:36.863 00:05:36.863 Suite: memory 00:05:36.863 Test: test ... 00:05:36.863 register 0x200000200000 2097152 00:05:36.863 malloc 3145728 00:05:36.863 register 0x200000400000 4194304 00:05:36.863 buf 0x2000004fffc0 len 3145728 PASSED 00:05:36.863 malloc 64 00:05:36.863 buf 0x2000004ffec0 len 64 PASSED 00:05:36.863 malloc 4194304 00:05:36.863 register 0x200000800000 6291456 00:05:36.863 buf 0x2000009fffc0 len 4194304 PASSED 00:05:36.863 free 0x2000004fffc0 3145728 00:05:36.863 free 0x2000004ffec0 64 00:05:36.863 unregister 0x200000400000 4194304 PASSED 00:05:36.863 free 0x2000009fffc0 4194304 00:05:36.863 unregister 0x200000800000 6291456 PASSED 00:05:36.863 malloc 8388608 00:05:36.863 register 0x200000400000 10485760 00:05:36.863 buf 0x2000005fffc0 len 8388608 PASSED 00:05:36.863 free 0x2000005fffc0 8388608 00:05:36.863 unregister 0x200000400000 10485760 PASSED 00:05:36.863 passed 00:05:36.863 00:05:36.863 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.863 suites 1 1 n/a 0 0 00:05:36.863 tests 1 1 1 0 0 00:05:36.863 asserts 15 15 15 0 n/a 00:05:36.863 00:05:36.863 Elapsed time = 0.078 seconds 00:05:36.863 00:05:36.863 real 0m0.295s 00:05:36.863 user 0m0.099s 00:05:36.863 sys 0m0.094s 00:05:36.863 11:09:14 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.863 11:09:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:36.863 ************************************ 00:05:36.863 END TEST env_mem_callbacks 00:05:36.863 ************************************ 00:05:36.863 00:05:36.863 real 0m11.031s 00:05:36.863 user 0m8.661s 00:05:36.863 sys 0m1.988s 00:05:36.864 11:09:14 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.864 11:09:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.864 ************************************ 00:05:36.864 END TEST env 00:05:36.864 
************************************ 00:05:36.864 11:09:14 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:36.864 11:09:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:36.864 11:09:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.864 11:09:14 -- common/autotest_common.sh@10 -- # set +x 00:05:36.864 ************************************ 00:05:36.864 START TEST rpc 00:05:36.864 ************************************ 00:05:36.864 11:09:14 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:37.122 * Looking for test storage... 00:05:37.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:37.122 11:09:14 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:37.122 11:09:14 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:37.122 11:09:14 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:37.122 11:09:14 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:37.122 11:09:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.122 11:09:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.122 11:09:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.122 11:09:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.122 11:09:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.122 11:09:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.122 11:09:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.122 11:09:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.122 11:09:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.122 11:09:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.122 11:09:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.122 11:09:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:37.122 11:09:14 rpc -- scripts/common.sh@345 -- # : 1 00:05:37.122 11:09:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.122 11:09:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.122 11:09:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:37.122 11:09:14 rpc -- scripts/common.sh@353 -- # local d=1 00:05:37.122 11:09:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.122 11:09:14 rpc -- scripts/common.sh@355 -- # echo 1 00:05:37.122 11:09:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.122 11:09:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:37.122 11:09:14 rpc -- scripts/common.sh@353 -- # local d=2 00:05:37.123 11:09:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.123 11:09:14 rpc -- scripts/common.sh@355 -- # echo 2 00:05:37.123 11:09:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.123 11:09:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.123 11:09:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.123 11:09:14 rpc -- scripts/common.sh@368 -- # return 0 00:05:37.123 11:09:14 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.123 11:09:14 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:37.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.123 --rc genhtml_branch_coverage=1 00:05:37.123 --rc genhtml_function_coverage=1 00:05:37.123 --rc genhtml_legend=1 00:05:37.123 --rc geninfo_all_blocks=1 00:05:37.123 --rc geninfo_unexecuted_blocks=1 00:05:37.123 00:05:37.123 ' 00:05:37.123 11:09:14 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:37.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.123 --rc genhtml_branch_coverage=1 00:05:37.123 --rc genhtml_function_coverage=1 00:05:37.123 --rc genhtml_legend=1 00:05:37.123 --rc geninfo_all_blocks=1 00:05:37.123 --rc geninfo_unexecuted_blocks=1 00:05:37.123 00:05:37.123 ' 00:05:37.123 11:09:14 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:37.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.123 --rc genhtml_branch_coverage=1 00:05:37.123 --rc genhtml_function_coverage=1 00:05:37.123 --rc genhtml_legend=1 00:05:37.123 --rc geninfo_all_blocks=1 00:05:37.123 --rc geninfo_unexecuted_blocks=1 00:05:37.123 00:05:37.123 ' 00:05:37.123 11:09:14 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:37.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.123 --rc genhtml_branch_coverage=1 00:05:37.123 --rc genhtml_function_coverage=1 00:05:37.123 --rc genhtml_legend=1 00:05:37.123 --rc geninfo_all_blocks=1 00:05:37.123 --rc geninfo_unexecuted_blocks=1 00:05:37.123 00:05:37.123 ' 00:05:37.123 11:09:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57775 00:05:37.123 11:09:14 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:37.123 11:09:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.123 11:09:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57775 00:05:37.123 11:09:14 rpc -- common/autotest_common.sh@833 -- # '[' -z 57775 ']' 00:05:37.123 11:09:14 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.123 11:09:14 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:37.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.123 11:09:14 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
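[Editor's note] waitforlisten above blocks until the freshly launched target answers on its UNIX-domain RPC socket. A condensed equivalent of that handshake; the spdk_tgt path, the -e bdev flag, and /var/tmp/spdk.sock are all taken from this log, while the rpc_get_methods probe and the 0.1s poll interval are illustrative choices:

    # --- illustrative sketch, not part of the captured log ---
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    until $rpc rpc_get_methods >/dev/null 2>&1; do   # retry until the socket answers
        sleep 0.1
    done
    echo "spdk_tgt ($spdk_pid) is listening"
    # --- end sketch ---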
00:05:37.123 11:09:14 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:37.123 11:09:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.381 [2024-11-15 11:09:14.610583] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:05:37.381 [2024-11-15 11:09:14.611084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57775 ] 00:05:37.639 [2024-11-15 11:09:14.791646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.639 [2024-11-15 11:09:14.936471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:37.639 [2024-11-15 11:09:14.936555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57775' to capture a snapshot of events at runtime. 00:05:37.639 [2024-11-15 11:09:14.936579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:37.639 [2024-11-15 11:09:14.936596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:37.639 [2024-11-15 11:09:14.936607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57775 for offline analysis/debug. 00:05:37.639 [2024-11-15 11:09:14.938065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.575 11:09:15 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.575 11:09:15 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:38.575 11:09:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:38.575 11:09:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:38.575 11:09:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:38.575 11:09:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:38.575 11:09:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:38.575 11:09:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:38.575 11:09:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.575 ************************************ 00:05:38.575 START TEST rpc_integrity 00:05:38.575 ************************************ 00:05:38.575 11:09:15 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:38.575 11:09:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.575 11:09:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.575 11:09:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.575 11:09:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.575 11:09:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.575 11:09:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:38.835 11:09:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.835 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.835 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.835 11:09:16 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.835 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.835 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:38.835 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.835 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.835 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.835 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.835 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.835 { 00:05:38.835 "name": "Malloc0", 00:05:38.835 "aliases": [ 00:05:38.835 "664fc28e-73c0-4b15-acbc-b4379171f27e" 00:05:38.835 ], 00:05:38.835 "product_name": "Malloc disk", 00:05:38.835 "block_size": 512, 00:05:38.835 "num_blocks": 16384, 00:05:38.835 "uuid": "664fc28e-73c0-4b15-acbc-b4379171f27e", 00:05:38.835 "assigned_rate_limits": { 00:05:38.835 "rw_ios_per_sec": 0, 00:05:38.835 "rw_mbytes_per_sec": 0, 00:05:38.835 "r_mbytes_per_sec": 0, 00:05:38.835 "w_mbytes_per_sec": 0 00:05:38.835 }, 00:05:38.835 "claimed": false, 00:05:38.835 "zoned": false, 00:05:38.835 "supported_io_types": { 00:05:38.835 "read": true, 00:05:38.835 "write": true, 00:05:38.835 "unmap": true, 00:05:38.835 "flush": true, 00:05:38.835 "reset": true, 00:05:38.835 "nvme_admin": false, 00:05:38.835 "nvme_io": false, 00:05:38.835 "nvme_io_md": false, 00:05:38.835 "write_zeroes": true, 00:05:38.835 "zcopy": true, 00:05:38.835 "get_zone_info": false, 00:05:38.835 "zone_management": false, 00:05:38.835 "zone_append": false, 00:05:38.835 "compare": false, 00:05:38.835 "compare_and_write": false, 00:05:38.835 "abort": true, 00:05:38.835 "seek_hole": false, 00:05:38.835 "seek_data": false, 00:05:38.835 "copy": true, 00:05:38.835 "nvme_iov_md": false 00:05:38.835 }, 00:05:38.835 "memory_domains": [ 00:05:38.835 { 00:05:38.835 "dma_device_id": "system", 00:05:38.835 "dma_device_type": 1 00:05:38.835 }, 00:05:38.835 { 00:05:38.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.835 "dma_device_type": 2 00:05:38.835 } 00:05:38.836 ], 00:05:38.836 "driver_specific": {} 00:05:38.836 } 00:05:38.836 ]' 00:05:38.836 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:38.836 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.836 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:38.836 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.836 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.836 [2024-11-15 11:09:16.109073] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:38.836 [2024-11-15 11:09:16.109143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.836 [2024-11-15 11:09:16.109178] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:38.836 [2024-11-15 11:09:16.109198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.836 [2024-11-15 11:09:16.112054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.836 [2024-11-15 11:09:16.112105] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.836 Passthru0 00:05:38.836 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.836 
11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.836 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.836 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.836 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.836 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.836 { 00:05:38.836 "name": "Malloc0", 00:05:38.836 "aliases": [ 00:05:38.836 "664fc28e-73c0-4b15-acbc-b4379171f27e" 00:05:38.836 ], 00:05:38.836 "product_name": "Malloc disk", 00:05:38.836 "block_size": 512, 00:05:38.836 "num_blocks": 16384, 00:05:38.836 "uuid": "664fc28e-73c0-4b15-acbc-b4379171f27e", 00:05:38.836 "assigned_rate_limits": { 00:05:38.836 "rw_ios_per_sec": 0, 00:05:38.836 "rw_mbytes_per_sec": 0, 00:05:38.836 "r_mbytes_per_sec": 0, 00:05:38.836 "w_mbytes_per_sec": 0 00:05:38.836 }, 00:05:38.836 "claimed": true, 00:05:38.836 "claim_type": "exclusive_write", 00:05:38.836 "zoned": false, 00:05:38.836 "supported_io_types": { 00:05:38.836 "read": true, 00:05:38.836 "write": true, 00:05:38.836 "unmap": true, 00:05:38.836 "flush": true, 00:05:38.836 "reset": true, 00:05:38.836 "nvme_admin": false, 00:05:38.836 "nvme_io": false, 00:05:38.836 "nvme_io_md": false, 00:05:38.836 "write_zeroes": true, 00:05:38.836 "zcopy": true, 00:05:38.836 "get_zone_info": false, 00:05:38.836 "zone_management": false, 00:05:38.836 "zone_append": false, 00:05:38.836 "compare": false, 00:05:38.836 "compare_and_write": false, 00:05:38.836 "abort": true, 00:05:38.836 "seek_hole": false, 00:05:38.836 "seek_data": false, 00:05:38.836 "copy": true, 00:05:38.836 "nvme_iov_md": false 00:05:38.836 }, 00:05:38.836 "memory_domains": [ 00:05:38.836 { 00:05:38.836 "dma_device_id": "system", 00:05:38.836 "dma_device_type": 1 00:05:38.836 }, 00:05:38.836 { 00:05:38.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.836 "dma_device_type": 2 00:05:38.836 } 00:05:38.836 ], 00:05:38.836 "driver_specific": {} 00:05:38.836 }, 00:05:38.836 { 00:05:38.836 "name": "Passthru0", 00:05:38.836 "aliases": [ 00:05:38.836 "71ce8a88-1c89-51a8-b0f5-4e4eda0f5580" 00:05:38.836 ], 00:05:38.836 "product_name": "passthru", 00:05:38.836 "block_size": 512, 00:05:38.836 "num_blocks": 16384, 00:05:38.836 "uuid": "71ce8a88-1c89-51a8-b0f5-4e4eda0f5580", 00:05:38.836 "assigned_rate_limits": { 00:05:38.836 "rw_ios_per_sec": 0, 00:05:38.836 "rw_mbytes_per_sec": 0, 00:05:38.836 "r_mbytes_per_sec": 0, 00:05:38.836 "w_mbytes_per_sec": 0 00:05:38.836 }, 00:05:38.836 "claimed": false, 00:05:38.836 "zoned": false, 00:05:38.836 "supported_io_types": { 00:05:38.836 "read": true, 00:05:38.836 "write": true, 00:05:38.836 "unmap": true, 00:05:38.836 "flush": true, 00:05:38.836 "reset": true, 00:05:38.836 "nvme_admin": false, 00:05:38.836 "nvme_io": false, 00:05:38.836 "nvme_io_md": false, 00:05:38.836 "write_zeroes": true, 00:05:38.836 "zcopy": true, 00:05:38.836 "get_zone_info": false, 00:05:38.836 "zone_management": false, 00:05:38.836 "zone_append": false, 00:05:38.836 "compare": false, 00:05:38.836 "compare_and_write": false, 00:05:38.836 "abort": true, 00:05:38.836 "seek_hole": false, 00:05:38.836 "seek_data": false, 00:05:38.836 "copy": true, 00:05:38.836 "nvme_iov_md": false 00:05:38.836 }, 00:05:38.836 "memory_domains": [ 00:05:38.836 { 00:05:38.836 "dma_device_id": "system", 00:05:38.836 "dma_device_type": 1 00:05:38.836 }, 00:05:38.836 { 00:05:38.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.836 "dma_device_type": 2 
00:05:38.836 } 00:05:38.836 ], 00:05:38.836 "driver_specific": { 00:05:38.836 "passthru": { 00:05:38.836 "name": "Passthru0", 00:05:38.836 "base_bdev_name": "Malloc0" 00:05:38.836 } 00:05:38.836 } 00:05:38.836 } 00:05:38.836 ]' 00:05:38.836 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:38.836 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.836 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.836 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.836 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.836 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.836 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:38.836 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.836 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.096 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.096 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.096 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.096 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.096 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.096 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:39.096 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:39.096 11:09:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.096 00:05:39.096 real 0m0.367s 00:05:39.096 user 0m0.197s 00:05:39.096 sys 0m0.068s 00:05:39.096 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.096 11:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.096 ************************************ 00:05:39.096 END TEST rpc_integrity 00:05:39.096 ************************************ 00:05:39.096 11:09:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:39.096 11:09:16 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.096 11:09:16 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.096 11:09:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.096 ************************************ 00:05:39.096 START TEST rpc_plugins 00:05:39.096 ************************************ 00:05:39.096 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:39.096 11:09:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:39.096 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.096 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.096 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.096 11:09:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:39.096 11:09:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:39.096 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.096 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.096 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.096 11:09:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:39.096 { 00:05:39.096 "name": "Malloc1", 00:05:39.096 "aliases": 
[ 00:05:39.096 "5d60c93c-64e0-4107-92b1-82107328ea5c" 00:05:39.096 ], 00:05:39.096 "product_name": "Malloc disk", 00:05:39.096 "block_size": 4096, 00:05:39.096 "num_blocks": 256, 00:05:39.096 "uuid": "5d60c93c-64e0-4107-92b1-82107328ea5c", 00:05:39.096 "assigned_rate_limits": { 00:05:39.096 "rw_ios_per_sec": 0, 00:05:39.096 "rw_mbytes_per_sec": 0, 00:05:39.096 "r_mbytes_per_sec": 0, 00:05:39.096 "w_mbytes_per_sec": 0 00:05:39.096 }, 00:05:39.096 "claimed": false, 00:05:39.096 "zoned": false, 00:05:39.096 "supported_io_types": { 00:05:39.096 "read": true, 00:05:39.096 "write": true, 00:05:39.096 "unmap": true, 00:05:39.096 "flush": true, 00:05:39.096 "reset": true, 00:05:39.096 "nvme_admin": false, 00:05:39.096 "nvme_io": false, 00:05:39.096 "nvme_io_md": false, 00:05:39.096 "write_zeroes": true, 00:05:39.096 "zcopy": true, 00:05:39.096 "get_zone_info": false, 00:05:39.096 "zone_management": false, 00:05:39.096 "zone_append": false, 00:05:39.096 "compare": false, 00:05:39.096 "compare_and_write": false, 00:05:39.096 "abort": true, 00:05:39.096 "seek_hole": false, 00:05:39.096 "seek_data": false, 00:05:39.096 "copy": true, 00:05:39.096 "nvme_iov_md": false 00:05:39.096 }, 00:05:39.096 "memory_domains": [ 00:05:39.096 { 00:05:39.096 "dma_device_id": "system", 00:05:39.096 "dma_device_type": 1 00:05:39.096 }, 00:05:39.096 { 00:05:39.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.096 "dma_device_type": 2 00:05:39.096 } 00:05:39.096 ], 00:05:39.096 "driver_specific": {} 00:05:39.096 } 00:05:39.096 ]' 00:05:39.096 11:09:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:39.096 11:09:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:39.096 11:09:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:39.096 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.096 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.096 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.096 11:09:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:39.096 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.096 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.355 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.355 11:09:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:39.355 11:09:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:39.355 11:09:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:39.355 00:05:39.355 real 0m0.180s 00:05:39.355 user 0m0.101s 00:05:39.355 sys 0m0.037s 00:05:39.355 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.355 11:09:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.355 ************************************ 00:05:39.355 END TEST rpc_plugins 00:05:39.355 ************************************ 00:05:39.355 11:09:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:39.355 11:09:16 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.355 11:09:16 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.355 11:09:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.355 ************************************ 00:05:39.355 START TEST rpc_trace_cmd_test 00:05:39.355 ************************************ 00:05:39.355 11:09:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:05:39.355 11:09:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:39.355 11:09:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:39.355 11:09:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.355 11:09:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.355 11:09:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.355 11:09:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:39.355 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57775", 00:05:39.355 "tpoint_group_mask": "0x8", 00:05:39.355 "iscsi_conn": { 00:05:39.355 "mask": "0x2", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "scsi": { 00:05:39.355 "mask": "0x4", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "bdev": { 00:05:39.355 "mask": "0x8", 00:05:39.355 "tpoint_mask": "0xffffffffffffffff" 00:05:39.355 }, 00:05:39.355 "nvmf_rdma": { 00:05:39.355 "mask": "0x10", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "nvmf_tcp": { 00:05:39.355 "mask": "0x20", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "ftl": { 00:05:39.355 "mask": "0x40", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "blobfs": { 00:05:39.355 "mask": "0x80", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "dsa": { 00:05:39.355 "mask": "0x200", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "thread": { 00:05:39.355 "mask": "0x400", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "nvme_pcie": { 00:05:39.355 "mask": "0x800", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "iaa": { 00:05:39.355 "mask": "0x1000", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "nvme_tcp": { 00:05:39.355 "mask": "0x2000", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "bdev_nvme": { 00:05:39.355 "mask": "0x4000", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "sock": { 00:05:39.355 "mask": "0x8000", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "blob": { 00:05:39.355 "mask": "0x10000", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "bdev_raid": { 00:05:39.355 "mask": "0x20000", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 }, 00:05:39.355 "scheduler": { 00:05:39.355 "mask": "0x40000", 00:05:39.355 "tpoint_mask": "0x0" 00:05:39.355 } 00:05:39.355 }' 00:05:39.355 11:09:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:39.355 11:09:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:39.355 11:09:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:39.355 11:09:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:39.355 11:09:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:39.615 11:09:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:39.615 11:09:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:39.615 11:09:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:39.615 11:09:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:39.615 11:09:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:39.615 00:05:39.615 real 0m0.223s 00:05:39.615 user 0m0.174s 00:05:39.615 sys 0m0.036s 00:05:39.615 11:09:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:05:39.615 11:09:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.615 ************************************ 00:05:39.615 END TEST rpc_trace_cmd_test 00:05:39.615 ************************************ 00:05:39.615 11:09:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:39.615 11:09:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:39.615 11:09:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:39.615 11:09:16 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.615 11:09:16 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.615 11:09:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.615 ************************************ 00:05:39.615 START TEST rpc_daemon_integrity 00:05:39.615 ************************************ 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.615 11:09:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.875 { 00:05:39.875 "name": "Malloc2", 00:05:39.875 "aliases": [ 00:05:39.875 "571a3bdb-db19-4122-be11-805f977af493" 00:05:39.875 ], 00:05:39.875 "product_name": "Malloc disk", 00:05:39.875 "block_size": 512, 00:05:39.875 "num_blocks": 16384, 00:05:39.875 "uuid": "571a3bdb-db19-4122-be11-805f977af493", 00:05:39.875 "assigned_rate_limits": { 00:05:39.875 "rw_ios_per_sec": 0, 00:05:39.875 "rw_mbytes_per_sec": 0, 00:05:39.875 "r_mbytes_per_sec": 0, 00:05:39.875 "w_mbytes_per_sec": 0 00:05:39.875 }, 00:05:39.875 "claimed": false, 00:05:39.875 "zoned": false, 00:05:39.875 "supported_io_types": { 00:05:39.875 "read": true, 00:05:39.875 "write": true, 00:05:39.875 "unmap": true, 00:05:39.875 "flush": true, 00:05:39.875 "reset": true, 00:05:39.875 "nvme_admin": false, 00:05:39.875 "nvme_io": false, 00:05:39.875 "nvme_io_md": false, 00:05:39.875 "write_zeroes": true, 00:05:39.875 "zcopy": true, 00:05:39.875 "get_zone_info": false, 00:05:39.875 "zone_management": false, 00:05:39.875 "zone_append": false, 00:05:39.875 "compare": false, 00:05:39.875 
"compare_and_write": false, 00:05:39.875 "abort": true, 00:05:39.875 "seek_hole": false, 00:05:39.875 "seek_data": false, 00:05:39.875 "copy": true, 00:05:39.875 "nvme_iov_md": false 00:05:39.875 }, 00:05:39.875 "memory_domains": [ 00:05:39.875 { 00:05:39.875 "dma_device_id": "system", 00:05:39.875 "dma_device_type": 1 00:05:39.875 }, 00:05:39.875 { 00:05:39.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.875 "dma_device_type": 2 00:05:39.875 } 00:05:39.875 ], 00:05:39.875 "driver_specific": {} 00:05:39.875 } 00:05:39.875 ]' 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.875 [2024-11-15 11:09:17.064935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:39.875 [2024-11-15 11:09:17.064999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.875 [2024-11-15 11:09:17.065023] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:39.875 [2024-11-15 11:09:17.065039] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.875 [2024-11-15 11:09:17.067854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.875 [2024-11-15 11:09:17.067901] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:39.875 Passthru0 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:39.875 { 00:05:39.875 "name": "Malloc2", 00:05:39.875 "aliases": [ 00:05:39.875 "571a3bdb-db19-4122-be11-805f977af493" 00:05:39.875 ], 00:05:39.875 "product_name": "Malloc disk", 00:05:39.875 "block_size": 512, 00:05:39.875 "num_blocks": 16384, 00:05:39.875 "uuid": "571a3bdb-db19-4122-be11-805f977af493", 00:05:39.875 "assigned_rate_limits": { 00:05:39.875 "rw_ios_per_sec": 0, 00:05:39.875 "rw_mbytes_per_sec": 0, 00:05:39.875 "r_mbytes_per_sec": 0, 00:05:39.875 "w_mbytes_per_sec": 0 00:05:39.875 }, 00:05:39.875 "claimed": true, 00:05:39.875 "claim_type": "exclusive_write", 00:05:39.875 "zoned": false, 00:05:39.875 "supported_io_types": { 00:05:39.875 "read": true, 00:05:39.875 "write": true, 00:05:39.875 "unmap": true, 00:05:39.875 "flush": true, 00:05:39.875 "reset": true, 00:05:39.875 "nvme_admin": false, 00:05:39.875 "nvme_io": false, 00:05:39.875 "nvme_io_md": false, 00:05:39.875 "write_zeroes": true, 00:05:39.875 "zcopy": true, 00:05:39.875 "get_zone_info": false, 00:05:39.875 "zone_management": false, 00:05:39.875 "zone_append": false, 00:05:39.875 "compare": false, 00:05:39.875 "compare_and_write": false, 00:05:39.875 "abort": true, 00:05:39.875 "seek_hole": false, 00:05:39.875 "seek_data": false, 
00:05:39.875 "copy": true, 00:05:39.875 "nvme_iov_md": false 00:05:39.875 }, 00:05:39.875 "memory_domains": [ 00:05:39.875 { 00:05:39.875 "dma_device_id": "system", 00:05:39.875 "dma_device_type": 1 00:05:39.875 }, 00:05:39.875 { 00:05:39.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.875 "dma_device_type": 2 00:05:39.875 } 00:05:39.875 ], 00:05:39.875 "driver_specific": {} 00:05:39.875 }, 00:05:39.875 { 00:05:39.875 "name": "Passthru0", 00:05:39.875 "aliases": [ 00:05:39.875 "cd83e42e-a23c-513e-87f4-cee19222cda5" 00:05:39.875 ], 00:05:39.875 "product_name": "passthru", 00:05:39.875 "block_size": 512, 00:05:39.875 "num_blocks": 16384, 00:05:39.875 "uuid": "cd83e42e-a23c-513e-87f4-cee19222cda5", 00:05:39.875 "assigned_rate_limits": { 00:05:39.875 "rw_ios_per_sec": 0, 00:05:39.875 "rw_mbytes_per_sec": 0, 00:05:39.875 "r_mbytes_per_sec": 0, 00:05:39.875 "w_mbytes_per_sec": 0 00:05:39.875 }, 00:05:39.875 "claimed": false, 00:05:39.875 "zoned": false, 00:05:39.875 "supported_io_types": { 00:05:39.875 "read": true, 00:05:39.875 "write": true, 00:05:39.875 "unmap": true, 00:05:39.875 "flush": true, 00:05:39.875 "reset": true, 00:05:39.875 "nvme_admin": false, 00:05:39.875 "nvme_io": false, 00:05:39.875 "nvme_io_md": false, 00:05:39.875 "write_zeroes": true, 00:05:39.875 "zcopy": true, 00:05:39.875 "get_zone_info": false, 00:05:39.875 "zone_management": false, 00:05:39.875 "zone_append": false, 00:05:39.875 "compare": false, 00:05:39.875 "compare_and_write": false, 00:05:39.875 "abort": true, 00:05:39.875 "seek_hole": false, 00:05:39.875 "seek_data": false, 00:05:39.875 "copy": true, 00:05:39.875 "nvme_iov_md": false 00:05:39.875 }, 00:05:39.875 "memory_domains": [ 00:05:39.875 { 00:05:39.875 "dma_device_id": "system", 00:05:39.875 "dma_device_type": 1 00:05:39.875 }, 00:05:39.875 { 00:05:39.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.875 "dma_device_type": 2 00:05:39.875 } 00:05:39.875 ], 00:05:39.875 "driver_specific": { 00:05:39.875 "passthru": { 00:05:39.875 "name": "Passthru0", 00:05:39.875 "base_bdev_name": "Malloc2" 00:05:39.875 } 00:05:39.875 } 00:05:39.875 } 00:05:39.875 ]' 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.875 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.876 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.876 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.876 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:39.876 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:39.876 11:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.876 00:05:39.876 real 0m0.334s 00:05:39.876 user 0m0.185s 00:05:39.876 sys 0m0.054s 00:05:39.876 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.876 11:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.876 ************************************ 00:05:39.876 END TEST rpc_daemon_integrity 00:05:39.876 ************************************ 00:05:40.134 11:09:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:40.134 11:09:17 rpc -- rpc/rpc.sh@84 -- # killprocess 57775 00:05:40.134 11:09:17 rpc -- common/autotest_common.sh@952 -- # '[' -z 57775 ']' 00:05:40.134 11:09:17 rpc -- common/autotest_common.sh@956 -- # kill -0 57775 00:05:40.134 11:09:17 rpc -- common/autotest_common.sh@957 -- # uname 00:05:40.134 11:09:17 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:40.134 11:09:17 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57775 00:05:40.134 11:09:17 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:40.134 11:09:17 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:40.134 killing process with pid 57775 00:05:40.134 11:09:17 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57775' 00:05:40.134 11:09:17 rpc -- common/autotest_common.sh@971 -- # kill 57775 00:05:40.134 11:09:17 rpc -- common/autotest_common.sh@976 -- # wait 57775 00:05:42.663 00:05:42.663 real 0m5.719s 00:05:42.663 user 0m6.165s 00:05:42.663 sys 0m1.157s 00:05:42.663 11:09:19 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:42.663 11:09:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.663 ************************************ 00:05:42.663 END TEST rpc 00:05:42.663 ************************************ 00:05:42.663 11:09:20 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:42.663 11:09:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:42.663 11:09:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:42.663 11:09:20 -- common/autotest_common.sh@10 -- # set +x 00:05:42.663 ************************************ 00:05:42.663 START TEST skip_rpc 00:05:42.663 ************************************ 00:05:42.663 11:09:20 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:42.921 * Looking for test storage... 
00:05:42.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.921 11:09:20 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:42.921 11:09:20 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:42.921 11:09:20 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:42.921 11:09:20 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.921 11:09:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:42.921 11:09:20 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.921 11:09:20 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:42.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.921 --rc genhtml_branch_coverage=1 00:05:42.921 --rc genhtml_function_coverage=1 00:05:42.921 --rc genhtml_legend=1 00:05:42.921 --rc geninfo_all_blocks=1 00:05:42.921 --rc geninfo_unexecuted_blocks=1 00:05:42.921 00:05:42.921 ' 00:05:42.921 11:09:20 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:42.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.921 --rc genhtml_branch_coverage=1 00:05:42.921 --rc genhtml_function_coverage=1 00:05:42.921 --rc genhtml_legend=1 00:05:42.921 --rc geninfo_all_blocks=1 00:05:42.921 --rc geninfo_unexecuted_blocks=1 00:05:42.921 00:05:42.921 ' 00:05:42.921 11:09:20 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:05:42.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.921 --rc genhtml_branch_coverage=1 00:05:42.921 --rc genhtml_function_coverage=1 00:05:42.921 --rc genhtml_legend=1 00:05:42.921 --rc geninfo_all_blocks=1 00:05:42.921 --rc geninfo_unexecuted_blocks=1 00:05:42.921 00:05:42.921 ' 00:05:42.921 11:09:20 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:42.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.921 --rc genhtml_branch_coverage=1 00:05:42.921 --rc genhtml_function_coverage=1 00:05:42.921 --rc genhtml_legend=1 00:05:42.921 --rc geninfo_all_blocks=1 00:05:42.921 --rc geninfo_unexecuted_blocks=1 00:05:42.921 00:05:42.921 ' 00:05:42.921 11:09:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.921 11:09:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:42.921 11:09:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:42.921 11:09:20 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:42.921 11:09:20 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:42.921 11:09:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.921 ************************************ 00:05:42.921 START TEST skip_rpc 00:05:42.921 ************************************ 00:05:42.921 11:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:43.179 11:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58015 00:05:43.179 11:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:43.179 11:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.179 11:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.179 [2024-11-15 11:09:20.430490] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
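The EAL parameter line for this launch continues below; the check that follows is the heart of skip_rpc: the target was started with --no-rpc-server, so every JSON-RPC call must fail, and the NOT wrapper inverts the exit status so that failure counts as a pass. A standalone sketch of the same assertion (default socket path assumed; the fixed sleep mirrors the test, since there is no RPC socket to wait on):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!
  sleep 5
  # spdk_get_version must fail: nothing is listening on /var/tmp/spdk.sock
  if scripts/rpc.py spdk_get_version 2>/dev/null; then
      echo "unexpected: RPC succeeded without an RPC server" >&2
      exit 1
  fi
  kill "$pid"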
00:05:43.179 [2024-11-15 11:09:20.430623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58015 ] 00:05:43.437 [2024-11-15 11:09:20.613637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.437 [2024-11-15 11:09:20.749133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58015 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 58015 ']' 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 58015 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58015 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:48.705 killing process with pid 58015 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58015' 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 58015 00:05:48.705 11:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 58015 00:05:50.610 00:05:50.610 real 0m7.660s 00:05:50.610 user 0m7.026s 00:05:50.610 sys 0m0.550s 00:05:50.610 11:09:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.610 ************************************ 00:05:50.610 END TEST skip_rpc 00:05:50.610 ************************************ 00:05:50.610 11:09:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:50.870 11:09:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:50.870 11:09:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:50.870 11:09:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.870 11:09:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.870 ************************************ 00:05:50.870 START TEST skip_rpc_with_json 00:05:50.870 ************************************ 00:05:50.870 11:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:50.870 11:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:50.870 11:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58119 00:05:50.870 11:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.870 11:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.870 11:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58119 00:05:50.870 11:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 58119 ']' 00:05:50.870 11:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.870 11:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:50.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.870 11:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.870 11:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:50.870 11:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.870 [2024-11-15 11:09:28.165337] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
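The skip_rpc_with_json run that starts here reduces to a save/restore round trip: create a TCP transport on a live target, snapshot the full configuration with save_config, then boot a fresh target from that JSON and grep its log for the transport init banner. A condensed sketch under the paths this run uses (run from the spdk repo root):

  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  # stop that target, then replay the snapshot into a fresh one and check the log
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' test/rpc/log.txt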
00:05:50.870 [2024-11-15 11:09:28.165468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58119 ] 00:05:51.130 [2024-11-15 11:09:28.347442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.130 [2024-11-15 11:09:28.476479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.191 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.191 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:52.191 11:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:52.191 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.191 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.191 [2024-11-15 11:09:29.483583] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:52.191 request: 00:05:52.191 { 00:05:52.191 "trtype": "tcp", 00:05:52.191 "method": "nvmf_get_transports", 00:05:52.191 "req_id": 1 00:05:52.191 } 00:05:52.191 Got JSON-RPC error response 00:05:52.191 response: 00:05:52.191 { 00:05:52.191 "code": -19, 00:05:52.191 "message": "No such device" 00:05:52.191 } 00:05:52.191 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:52.191 11:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:52.191 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.191 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.191 [2024-11-15 11:09:29.495673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.191 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.191 11:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:52.191 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.191 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.449 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.449 11:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:52.449 { 00:05:52.449 "subsystems": [ 00:05:52.449 { 00:05:52.449 "subsystem": "fsdev", 00:05:52.449 "config": [ 00:05:52.449 { 00:05:52.449 "method": "fsdev_set_opts", 00:05:52.449 "params": { 00:05:52.449 "fsdev_io_pool_size": 65535, 00:05:52.449 "fsdev_io_cache_size": 256 00:05:52.449 } 00:05:52.450 } 00:05:52.450 ] 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "keyring", 00:05:52.450 "config": [] 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "iobuf", 00:05:52.450 "config": [ 00:05:52.450 { 00:05:52.450 "method": "iobuf_set_options", 00:05:52.450 "params": { 00:05:52.450 "small_pool_count": 8192, 00:05:52.450 "large_pool_count": 1024, 00:05:52.450 "small_bufsize": 8192, 00:05:52.450 "large_bufsize": 135168, 00:05:52.450 "enable_numa": false 00:05:52.450 } 00:05:52.450 } 00:05:52.450 ] 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "sock", 00:05:52.450 "config": [ 00:05:52.450 { 
00:05:52.450 "method": "sock_set_default_impl", 00:05:52.450 "params": { 00:05:52.450 "impl_name": "posix" 00:05:52.450 } 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "method": "sock_impl_set_options", 00:05:52.450 "params": { 00:05:52.450 "impl_name": "ssl", 00:05:52.450 "recv_buf_size": 4096, 00:05:52.450 "send_buf_size": 4096, 00:05:52.450 "enable_recv_pipe": true, 00:05:52.450 "enable_quickack": false, 00:05:52.450 "enable_placement_id": 0, 00:05:52.450 "enable_zerocopy_send_server": true, 00:05:52.450 "enable_zerocopy_send_client": false, 00:05:52.450 "zerocopy_threshold": 0, 00:05:52.450 "tls_version": 0, 00:05:52.450 "enable_ktls": false 00:05:52.450 } 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "method": "sock_impl_set_options", 00:05:52.450 "params": { 00:05:52.450 "impl_name": "posix", 00:05:52.450 "recv_buf_size": 2097152, 00:05:52.450 "send_buf_size": 2097152, 00:05:52.450 "enable_recv_pipe": true, 00:05:52.450 "enable_quickack": false, 00:05:52.450 "enable_placement_id": 0, 00:05:52.450 "enable_zerocopy_send_server": true, 00:05:52.450 "enable_zerocopy_send_client": false, 00:05:52.450 "zerocopy_threshold": 0, 00:05:52.450 "tls_version": 0, 00:05:52.450 "enable_ktls": false 00:05:52.450 } 00:05:52.450 } 00:05:52.450 ] 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "vmd", 00:05:52.450 "config": [] 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "accel", 00:05:52.450 "config": [ 00:05:52.450 { 00:05:52.450 "method": "accel_set_options", 00:05:52.450 "params": { 00:05:52.450 "small_cache_size": 128, 00:05:52.450 "large_cache_size": 16, 00:05:52.450 "task_count": 2048, 00:05:52.450 "sequence_count": 2048, 00:05:52.450 "buf_count": 2048 00:05:52.450 } 00:05:52.450 } 00:05:52.450 ] 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "bdev", 00:05:52.450 "config": [ 00:05:52.450 { 00:05:52.450 "method": "bdev_set_options", 00:05:52.450 "params": { 00:05:52.450 "bdev_io_pool_size": 65535, 00:05:52.450 "bdev_io_cache_size": 256, 00:05:52.450 "bdev_auto_examine": true, 00:05:52.450 "iobuf_small_cache_size": 128, 00:05:52.450 "iobuf_large_cache_size": 16 00:05:52.450 } 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "method": "bdev_raid_set_options", 00:05:52.450 "params": { 00:05:52.450 "process_window_size_kb": 1024, 00:05:52.450 "process_max_bandwidth_mb_sec": 0 00:05:52.450 } 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "method": "bdev_iscsi_set_options", 00:05:52.450 "params": { 00:05:52.450 "timeout_sec": 30 00:05:52.450 } 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "method": "bdev_nvme_set_options", 00:05:52.450 "params": { 00:05:52.450 "action_on_timeout": "none", 00:05:52.450 "timeout_us": 0, 00:05:52.450 "timeout_admin_us": 0, 00:05:52.450 "keep_alive_timeout_ms": 10000, 00:05:52.450 "arbitration_burst": 0, 00:05:52.450 "low_priority_weight": 0, 00:05:52.450 "medium_priority_weight": 0, 00:05:52.450 "high_priority_weight": 0, 00:05:52.450 "nvme_adminq_poll_period_us": 10000, 00:05:52.450 "nvme_ioq_poll_period_us": 0, 00:05:52.450 "io_queue_requests": 0, 00:05:52.450 "delay_cmd_submit": true, 00:05:52.450 "transport_retry_count": 4, 00:05:52.450 "bdev_retry_count": 3, 00:05:52.450 "transport_ack_timeout": 0, 00:05:52.450 "ctrlr_loss_timeout_sec": 0, 00:05:52.450 "reconnect_delay_sec": 0, 00:05:52.450 "fast_io_fail_timeout_sec": 0, 00:05:52.450 "disable_auto_failback": false, 00:05:52.450 "generate_uuids": false, 00:05:52.450 "transport_tos": 0, 00:05:52.450 "nvme_error_stat": false, 00:05:52.450 "rdma_srq_size": 0, 00:05:52.450 "io_path_stat": false, 
00:05:52.450 "allow_accel_sequence": false, 00:05:52.450 "rdma_max_cq_size": 0, 00:05:52.450 "rdma_cm_event_timeout_ms": 0, 00:05:52.450 "dhchap_digests": [ 00:05:52.450 "sha256", 00:05:52.450 "sha384", 00:05:52.450 "sha512" 00:05:52.450 ], 00:05:52.450 "dhchap_dhgroups": [ 00:05:52.450 "null", 00:05:52.450 "ffdhe2048", 00:05:52.450 "ffdhe3072", 00:05:52.450 "ffdhe4096", 00:05:52.450 "ffdhe6144", 00:05:52.450 "ffdhe8192" 00:05:52.450 ] 00:05:52.450 } 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "method": "bdev_nvme_set_hotplug", 00:05:52.450 "params": { 00:05:52.450 "period_us": 100000, 00:05:52.450 "enable": false 00:05:52.450 } 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "method": "bdev_wait_for_examine" 00:05:52.450 } 00:05:52.450 ] 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "scsi", 00:05:52.450 "config": null 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "scheduler", 00:05:52.450 "config": [ 00:05:52.450 { 00:05:52.450 "method": "framework_set_scheduler", 00:05:52.450 "params": { 00:05:52.450 "name": "static" 00:05:52.450 } 00:05:52.450 } 00:05:52.450 ] 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "vhost_scsi", 00:05:52.450 "config": [] 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "vhost_blk", 00:05:52.450 "config": [] 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "ublk", 00:05:52.450 "config": [] 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "nbd", 00:05:52.450 "config": [] 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "nvmf", 00:05:52.450 "config": [ 00:05:52.450 { 00:05:52.450 "method": "nvmf_set_config", 00:05:52.450 "params": { 00:05:52.450 "discovery_filter": "match_any", 00:05:52.450 "admin_cmd_passthru": { 00:05:52.450 "identify_ctrlr": false 00:05:52.450 }, 00:05:52.450 "dhchap_digests": [ 00:05:52.450 "sha256", 00:05:52.450 "sha384", 00:05:52.450 "sha512" 00:05:52.450 ], 00:05:52.450 "dhchap_dhgroups": [ 00:05:52.450 "null", 00:05:52.450 "ffdhe2048", 00:05:52.450 "ffdhe3072", 00:05:52.450 "ffdhe4096", 00:05:52.450 "ffdhe6144", 00:05:52.450 "ffdhe8192" 00:05:52.450 ] 00:05:52.450 } 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "method": "nvmf_set_max_subsystems", 00:05:52.450 "params": { 00:05:52.450 "max_subsystems": 1024 00:05:52.450 } 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "method": "nvmf_set_crdt", 00:05:52.450 "params": { 00:05:52.450 "crdt1": 0, 00:05:52.450 "crdt2": 0, 00:05:52.450 "crdt3": 0 00:05:52.450 } 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "method": "nvmf_create_transport", 00:05:52.450 "params": { 00:05:52.450 "trtype": "TCP", 00:05:52.450 "max_queue_depth": 128, 00:05:52.450 "max_io_qpairs_per_ctrlr": 127, 00:05:52.450 "in_capsule_data_size": 4096, 00:05:52.450 "max_io_size": 131072, 00:05:52.450 "io_unit_size": 131072, 00:05:52.450 "max_aq_depth": 128, 00:05:52.450 "num_shared_buffers": 511, 00:05:52.450 "buf_cache_size": 4294967295, 00:05:52.450 "dif_insert_or_strip": false, 00:05:52.450 "zcopy": false, 00:05:52.450 "c2h_success": true, 00:05:52.450 "sock_priority": 0, 00:05:52.450 "abort_timeout_sec": 1, 00:05:52.450 "ack_timeout": 0, 00:05:52.450 "data_wr_pool_size": 0 00:05:52.450 } 00:05:52.450 } 00:05:52.450 ] 00:05:52.450 }, 00:05:52.450 { 00:05:52.450 "subsystem": "iscsi", 00:05:52.450 "config": [ 00:05:52.450 { 00:05:52.450 "method": "iscsi_set_options", 00:05:52.450 "params": { 00:05:52.450 "node_base": "iqn.2016-06.io.spdk", 00:05:52.450 "max_sessions": 128, 00:05:52.450 "max_connections_per_session": 2, 00:05:52.450 "max_queue_depth": 64, 00:05:52.450 
"default_time2wait": 2, 00:05:52.450 "default_time2retain": 20, 00:05:52.450 "first_burst_length": 8192, 00:05:52.450 "immediate_data": true, 00:05:52.450 "allow_duplicated_isid": false, 00:05:52.450 "error_recovery_level": 0, 00:05:52.450 "nop_timeout": 60, 00:05:52.450 "nop_in_interval": 30, 00:05:52.450 "disable_chap": false, 00:05:52.450 "require_chap": false, 00:05:52.450 "mutual_chap": false, 00:05:52.450 "chap_group": 0, 00:05:52.450 "max_large_datain_per_connection": 64, 00:05:52.450 "max_r2t_per_connection": 4, 00:05:52.450 "pdu_pool_size": 36864, 00:05:52.450 "immediate_data_pool_size": 16384, 00:05:52.450 "data_out_pool_size": 2048 00:05:52.450 } 00:05:52.450 } 00:05:52.450 ] 00:05:52.450 } 00:05:52.450 ] 00:05:52.450 } 00:05:52.451 11:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:52.451 11:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58119 00:05:52.451 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58119 ']' 00:05:52.451 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58119 00:05:52.451 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:52.451 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:52.451 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58119 00:05:52.451 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:52.451 killing process with pid 58119 00:05:52.451 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:52.451 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58119' 00:05:52.451 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 58119 00:05:52.451 11:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58119 00:05:55.736 11:09:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58175 00:05:55.736 11:09:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:55.736 11:09:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:01.010 11:09:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58175 00:06:01.010 11:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58175 ']' 00:06:01.010 11:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58175 00:06:01.010 11:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:01.010 11:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:01.010 11:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58175 00:06:01.010 11:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:01.010 11:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:01.010 killing process with pid 58175 00:06:01.010 11:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58175' 00:06:01.010 11:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 58175 00:06:01.010 11:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58175 00:06:02.917 11:09:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:02.917 11:09:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:02.918 00:06:02.918 real 0m12.003s 00:06:02.918 user 0m11.117s 00:06:02.918 sys 0m1.220s 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:02.918 ************************************ 00:06:02.918 END TEST skip_rpc_with_json 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.918 ************************************ 00:06:02.918 11:09:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:02.918 11:09:40 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:02.918 11:09:40 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:02.918 11:09:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.918 ************************************ 00:06:02.918 START TEST skip_rpc_with_delay 00:06:02.918 ************************************ 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.918 [2024-11-15 11:09:40.241875] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
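The error above is the whole of skip_rpc_with_delay: --wait-for-rpc tells the app to pause initialization until an RPC arrives, which is self-contradictory when --no-rpc-server suppresses the RPC server, so startup must abort with a non-zero status. The negative check, sketched without the NOT wrapper:

  # the flag combination is invalid, so spdk_tgt must refuse to start
  if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: target started despite conflicting flags" >&2
      exit 1
  fi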
00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.918 00:06:02.918 real 0m0.187s 00:06:02.918 user 0m0.090s 00:06:02.918 sys 0m0.095s 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:02.918 11:09:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:02.918 ************************************ 00:06:02.918 END TEST skip_rpc_with_delay 00:06:02.918 ************************************ 00:06:03.177 11:09:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:03.177 11:09:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:03.177 11:09:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:03.177 11:09:40 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:03.177 11:09:40 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.177 11:09:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.177 ************************************ 00:06:03.177 START TEST exit_on_failed_rpc_init 00:06:03.177 ************************************ 00:06:03.177 11:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:06:03.177 11:09:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58314 00:06:03.177 11:09:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.177 11:09:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58314 00:06:03.177 11:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 58314 ']' 00:06:03.177 11:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.177 11:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:03.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.177 11:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.177 11:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:03.177 11:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.177 [2024-11-15 11:09:40.512433] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
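waitforlisten, invoked above, is the disciplined alternative to a blind sleep: poll until the target answers on its RPC socket, with a retry budget (max_retries=100 in this trace) so a crashed target fails fast. A minimal stand-in for illustration only; the real helper in common/autotest_common.sh does more bookkeeping:

  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          # bail out early if the target died instead of coming up
          kill -0 "$pid" 2>/dev/null || return 1
          scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1 && return 0
          sleep 0.5
      done
      return 1
  }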
00:06:03.177 [2024-11-15 11:09:40.512582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58314 ] 00:06:03.436 [2024-11-15 11:09:40.700287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.694 [2024-11-15 11:09:40.840421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:04.632 11:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.632 [2024-11-15 11:09:41.943486] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:06:04.632 [2024-11-15 11:09:41.943613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58338 ] 00:06:04.891 [2024-11-15 11:09:42.124950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.891 [2024-11-15 11:09:42.237245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.891 [2024-11-15 11:09:42.237343] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:04.891 [2024-11-15 11:09:42.237360] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:04.891 [2024-11-15 11:09:42.237378] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58314 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 58314 ']' 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 58314 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58314 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:05.150 killing process with pid 58314 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:05.150 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58314' 00:06:05.151 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 58314 00:06:05.151 11:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 58314 00:06:08.446 00:06:08.446 real 0m4.749s 00:06:08.446 user 0m4.896s 00:06:08.446 sys 0m0.759s 00:06:08.446 11:09:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.446 11:09:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.446 ************************************ 00:06:08.446 END TEST exit_on_failed_rpc_init 00:06:08.446 ************************************ 00:06:08.446 11:09:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:08.446 00:06:08.446 real 0m25.160s 00:06:08.446 user 0m23.366s 00:06:08.446 sys 0m2.955s 00:06:08.446 11:09:45 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.446 11:09:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.446 ************************************ 00:06:08.446 END TEST skip_rpc 00:06:08.446 ************************************ 00:06:08.446 11:09:45 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:08.446 11:09:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.446 11:09:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.446 11:09:45 -- common/autotest_common.sh@10 -- # set +x 00:06:08.446 
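The exit_on_failed_rpc_init test above comes down to one rule: two spdk_tgt instances cannot share an RPC Unix socket, so the second must fail init ("in use. Specify another.") while the first stays usable. A sketch of that sequence; the -r flag naming the socket explicitly is an illustrative addition, since the test relies on both instances defaulting to /var/tmp/spdk.sock:

  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &
  first=$!
  sleep 5
  # the second instance must fail to bind the busy socket and exit non-zero
  if build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk.sock; then
      echo "unexpected: second target bound the busy socket" >&2
      exit 1
  fi
  # the first target must still answer
  scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version
  kill "$first"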
************************************ 00:06:08.446 START TEST rpc_client 00:06:08.446 ************************************ 00:06:08.446 11:09:45 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:08.446 * Looking for test storage... 00:06:08.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:08.446 11:09:45 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.446 11:09:45 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.446 11:09:45 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.446 11:09:45 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.446 11:09:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:08.446 11:09:45 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.446 11:09:45 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.446 --rc genhtml_branch_coverage=1 00:06:08.446 --rc genhtml_function_coverage=1 00:06:08.446 --rc genhtml_legend=1 00:06:08.446 --rc geninfo_all_blocks=1 00:06:08.446 --rc geninfo_unexecuted_blocks=1 00:06:08.446 00:06:08.446 ' 00:06:08.446 11:09:45 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.446 --rc genhtml_branch_coverage=1 00:06:08.446 --rc genhtml_function_coverage=1 00:06:08.446 --rc genhtml_legend=1 00:06:08.446 --rc geninfo_all_blocks=1 00:06:08.446 --rc geninfo_unexecuted_blocks=1 00:06:08.446 00:06:08.446 ' 00:06:08.446 11:09:45 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.446 --rc genhtml_branch_coverage=1 00:06:08.446 --rc genhtml_function_coverage=1 00:06:08.446 --rc genhtml_legend=1 00:06:08.446 --rc geninfo_all_blocks=1 00:06:08.446 --rc geninfo_unexecuted_blocks=1 00:06:08.446 00:06:08.446 ' 00:06:08.446 11:09:45 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.446 --rc genhtml_branch_coverage=1 00:06:08.446 --rc genhtml_function_coverage=1 00:06:08.446 --rc genhtml_legend=1 00:06:08.446 --rc geninfo_all_blocks=1 00:06:08.446 --rc geninfo_unexecuted_blocks=1 00:06:08.446 00:06:08.446 ' 00:06:08.446 11:09:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:08.446 OK 00:06:08.446 11:09:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:08.446 00:06:08.446 real 0m0.313s 00:06:08.446 user 0m0.184s 00:06:08.446 sys 0m0.149s 00:06:08.446 11:09:45 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.446 11:09:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:08.446 ************************************ 00:06:08.446 END TEST rpc_client 00:06:08.446 ************************************ 00:06:08.446 11:09:45 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:08.446 11:09:45 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.446 11:09:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.446 11:09:45 -- common/autotest_common.sh@10 -- # set +x 00:06:08.446 ************************************ 00:06:08.446 START TEST json_config 00:06:08.446 ************************************ 00:06:08.447 11:09:45 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:08.447 11:09:45 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.447 11:09:45 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.447 11:09:45 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.447 11:09:45 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.447 11:09:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.447 11:09:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.447 11:09:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.447 11:09:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.447 11:09:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.447 11:09:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.447 11:09:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.447 11:09:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.447 11:09:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.447 11:09:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.447 11:09:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.447 11:09:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:08.447 11:09:45 json_config -- scripts/common.sh@345 -- # : 1 00:06:08.447 11:09:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.447 11:09:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.447 11:09:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:08.711 11:09:45 json_config -- scripts/common.sh@353 -- # local d=1 00:06:08.711 11:09:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.711 11:09:45 json_config -- scripts/common.sh@355 -- # echo 1 00:06:08.711 11:09:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.711 11:09:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:08.711 11:09:45 json_config -- scripts/common.sh@353 -- # local d=2 00:06:08.711 11:09:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.711 11:09:45 json_config -- scripts/common.sh@355 -- # echo 2 00:06:08.711 11:09:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.711 11:09:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.711 11:09:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.711 11:09:45 json_config -- scripts/common.sh@368 -- # return 0 00:06:08.711 11:09:45 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.711 11:09:45 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.711 --rc genhtml_branch_coverage=1 00:06:08.711 --rc genhtml_function_coverage=1 00:06:08.711 --rc genhtml_legend=1 00:06:08.711 --rc geninfo_all_blocks=1 00:06:08.711 --rc geninfo_unexecuted_blocks=1 00:06:08.711 00:06:08.711 ' 00:06:08.711 11:09:45 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.711 --rc genhtml_branch_coverage=1 00:06:08.711 --rc genhtml_function_coverage=1 00:06:08.711 --rc genhtml_legend=1 00:06:08.711 --rc geninfo_all_blocks=1 00:06:08.711 --rc geninfo_unexecuted_blocks=1 00:06:08.711 00:06:08.711 ' 00:06:08.711 11:09:45 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.711 --rc genhtml_branch_coverage=1 00:06:08.711 --rc genhtml_function_coverage=1 00:06:08.711 --rc genhtml_legend=1 00:06:08.711 --rc geninfo_all_blocks=1 00:06:08.711 --rc geninfo_unexecuted_blocks=1 00:06:08.711 00:06:08.711 ' 00:06:08.711 11:09:45 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.711 --rc genhtml_branch_coverage=1 00:06:08.711 --rc genhtml_function_coverage=1 00:06:08.711 --rc genhtml_legend=1 00:06:08.711 --rc geninfo_all_blocks=1 00:06:08.711 --rc geninfo_unexecuted_blocks=1 00:06:08.711 00:06:08.711 ' 00:06:08.711 11:09:45 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.711 11:09:45 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ebaabc51-4779-460f-bf0c-937daf1be927 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ebaabc51-4779-460f-bf0c-937daf1be927 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.711 11:09:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.711 11:09:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.711 11:09:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.711 11:09:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.711 11:09:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.711 11:09:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.711 11:09:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.711 11:09:45 json_config -- paths/export.sh@5 -- # export PATH 00:06:08.711 11:09:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@51 -- # : 0 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:08.711 11:09:45 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:08.711 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:08.711 11:09:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:08.711 11:09:45 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:08.711 11:09:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:08.711 11:09:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:08.711 11:09:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:08.711 11:09:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:08.711 WARNING: No tests are enabled so not running JSON configuration tests 00:06:08.711 11:09:45 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:08.711 11:09:45 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:08.711 00:06:08.711 real 0m0.225s 00:06:08.711 user 0m0.129s 00:06:08.711 sys 0m0.106s 00:06:08.711 11:09:45 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.711 11:09:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.711 ************************************ 00:06:08.711 END TEST json_config 00:06:08.711 ************************************ 00:06:08.711 11:09:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:08.711 11:09:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.711 11:09:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.711 11:09:45 -- common/autotest_common.sh@10 -- # set +x 00:06:08.711 ************************************ 00:06:08.712 START TEST json_config_extra_key 00:06:08.712 ************************************ 00:06:08.712 11:09:45 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:08.712 11:09:46 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.712 11:09:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.712 11:09:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.971 11:09:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.971 11:09:46 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:08.971 11:09:46 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.971 11:09:46 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.971 --rc genhtml_branch_coverage=1 00:06:08.971 --rc genhtml_function_coverage=1 00:06:08.971 --rc genhtml_legend=1 00:06:08.971 --rc geninfo_all_blocks=1 00:06:08.971 --rc geninfo_unexecuted_blocks=1 00:06:08.971 00:06:08.971 ' 00:06:08.971 11:09:46 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.971 --rc genhtml_branch_coverage=1 00:06:08.971 --rc genhtml_function_coverage=1 00:06:08.971 --rc genhtml_legend=1 00:06:08.971 --rc geninfo_all_blocks=1 00:06:08.971 --rc geninfo_unexecuted_blocks=1 00:06:08.971 00:06:08.971 ' 00:06:08.971 11:09:46 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.971 --rc genhtml_branch_coverage=1 00:06:08.971 --rc genhtml_function_coverage=1 00:06:08.971 --rc genhtml_legend=1 00:06:08.971 --rc geninfo_all_blocks=1 00:06:08.971 --rc geninfo_unexecuted_blocks=1 00:06:08.971 00:06:08.971 ' 00:06:08.971 11:09:46 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.971 --rc genhtml_branch_coverage=1 00:06:08.971 --rc 
genhtml_function_coverage=1 00:06:08.971 --rc genhtml_legend=1 00:06:08.971 --rc geninfo_all_blocks=1 00:06:08.971 --rc geninfo_unexecuted_blocks=1 00:06:08.971 00:06:08.971 ' 00:06:08.971 11:09:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ebaabc51-4779-460f-bf0c-937daf1be927 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ebaabc51-4779-460f-bf0c-937daf1be927 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.971 11:09:46 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.971 11:09:46 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.971 11:09:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.971 11:09:46 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.971 11:09:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:08.971 11:09:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:08.971 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:08.971 11:09:46 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:08.971 11:09:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:08.971 11:09:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:08.971 11:09:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:08.971 11:09:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:08.971 11:09:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:08.971 11:09:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:08.971 11:09:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:08.971 11:09:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:08.972 11:09:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:08.972 11:09:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:08.972 INFO: launching applications... 00:06:08.972 11:09:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
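The traced lines above show the fixture state json_config_extra_key.sh builds before launching anything: one associative array per concern (PID, RPC socket, app parameters, config path), each keyed by app name, plus an ERR trap for cleanup. A minimal sketch of that pattern, reconstructed from the trace (on_error_exit is the script's own error handler, not shown in this excerpt):

    # One associative array per concern, keyed by app name ("target").
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
    # Abort the whole test on any failed command, reporting where it failed.
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR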
00:06:08.972 11:09:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:08.972 11:09:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:08.972 11:09:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:08.972 11:09:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.972 11:09:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.972 11:09:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.972 11:09:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.972 11:09:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.972 11:09:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58553 00:06:08.972 Waiting for target to run... 00:06:08.972 11:09:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.972 11:09:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58553 /var/tmp/spdk_tgt.sock 00:06:08.972 11:09:46 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 58553 ']' 00:06:08.972 11:09:46 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.972 11:09:46 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:08.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.972 11:09:46 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.972 11:09:46 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:08.972 11:09:46 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:08.972 11:09:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:08.972 [2024-11-15 11:09:46.296627] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:06:08.972 [2024-11-15 11:09:46.296759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58553 ] 00:06:09.540 [2024-11-15 11:09:46.693507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.540 [2024-11-15 11:09:46.813505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.477 11:09:47 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:10.477 11:09:47 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:10.477 00:06:10.477 11:09:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:10.477 INFO: shutting down applications... 00:06:10.477 11:09:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:10.477 11:09:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:10.477 11:09:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:10.477 11:09:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:10.477 11:09:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58553 ]] 00:06:10.477 11:09:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58553 00:06:10.477 11:09:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:10.477 11:09:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.477 11:09:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58553 00:06:10.477 11:09:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.735 11:09:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.735 11:09:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.735 11:09:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58553 00:06:10.735 11:09:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.304 11:09:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.304 11:09:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.304 11:09:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58553 00:06:11.304 11:09:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.871 11:09:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.871 11:09:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.871 11:09:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58553 00:06:11.871 11:09:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.459 11:09:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.459 11:09:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.459 11:09:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58553 00:06:12.459 11:09:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.718 11:09:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.718 11:09:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.718 11:09:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58553 00:06:12.718 11:09:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.292 11:09:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.292 11:09:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.292 11:09:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58553 00:06:13.292 11:09:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:13.292 SPDK target shutdown done 00:06:13.292 11:09:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:13.292 11:09:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:13.292 11:09:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:13.292 Success 00:06:13.292 11:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:13.292 ************************************ 00:06:13.292 END TEST json_config_extra_key 00:06:13.292 
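The repeated kill -0 / sleep 0.5 rounds above are the shutdown path of json_config/common.sh giving the target up to 30 half-second grace periods to exit after SIGINT. A condensed sketch of the traced logic:

    kill -SIGINT "${app_pid[$app]}"                      # ask spdk_tgt to exit
    for ((i = 0; i < 30; i++)); do
        kill -0 "${app_pid[$app]}" 2>/dev/null || break  # process gone? stop polling
        sleep 0.5
    done
    app_pid["$app"]=                                     # clear the slot
    echo 'SPDK target shutdown done'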
************************************ 00:06:13.292 00:06:13.292 real 0m4.647s 00:06:13.292 user 0m4.334s 00:06:13.292 sys 0m0.635s 00:06:13.292 11:09:50 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:13.292 11:09:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.292 11:09:50 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.292 11:09:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:13.292 11:09:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:13.292 11:09:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.292 ************************************ 00:06:13.292 START TEST alias_rpc 00:06:13.292 ************************************ 00:06:13.292 11:09:50 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.551 * Looking for test storage... 00:06:13.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:13.551 11:09:50 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:13.551 11:09:50 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:13.551 11:09:50 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:13.551 11:09:50 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.551 11:09:50 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:13.551 11:09:50 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.551 11:09:50 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:13.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.551 --rc genhtml_branch_coverage=1 00:06:13.551 --rc genhtml_function_coverage=1 00:06:13.551 --rc genhtml_legend=1 00:06:13.551 --rc geninfo_all_blocks=1 00:06:13.551 --rc geninfo_unexecuted_blocks=1 00:06:13.551 00:06:13.551 ' 00:06:13.551 11:09:50 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:13.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.551 --rc genhtml_branch_coverage=1 00:06:13.551 --rc genhtml_function_coverage=1 00:06:13.551 --rc genhtml_legend=1 00:06:13.551 --rc geninfo_all_blocks=1 00:06:13.551 --rc geninfo_unexecuted_blocks=1 00:06:13.551 00:06:13.551 ' 00:06:13.551 11:09:50 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:13.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.551 --rc genhtml_branch_coverage=1 00:06:13.551 --rc genhtml_function_coverage=1 00:06:13.551 --rc genhtml_legend=1 00:06:13.551 --rc geninfo_all_blocks=1 00:06:13.551 --rc geninfo_unexecuted_blocks=1 00:06:13.551 00:06:13.551 ' 00:06:13.551 11:09:50 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:13.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.551 --rc genhtml_branch_coverage=1 00:06:13.551 --rc genhtml_function_coverage=1 00:06:13.551 --rc genhtml_legend=1 00:06:13.551 --rc geninfo_all_blocks=1 00:06:13.551 --rc geninfo_unexecuted_blocks=1 00:06:13.551 00:06:13.551 ' 00:06:13.551 11:09:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.551 11:09:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58659 00:06:13.551 11:09:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.551 11:09:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58659 00:06:13.551 11:09:50 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 58659 ']' 00:06:13.551 11:09:50 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.551 11:09:50 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:13.551 11:09:50 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:13.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.552 11:09:50 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:13.552 11:09:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.810 [2024-11-15 11:09:51.034883] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:06:13.810 [2024-11-15 11:09:51.035169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58659 ] 00:06:14.069 [2024-11-15 11:09:51.216127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.069 [2024-11-15 11:09:51.356757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.005 11:09:52 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:15.005 11:09:52 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:15.005 11:09:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:15.263 11:09:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58659 00:06:15.263 11:09:52 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 58659 ']' 00:06:15.263 11:09:52 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 58659 00:06:15.263 11:09:52 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:15.263 11:09:52 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:15.263 11:09:52 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58659 00:06:15.521 killing process with pid 58659 00:06:15.521 11:09:52 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:15.521 11:09:52 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:15.521 11:09:52 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58659' 00:06:15.521 11:09:52 alias_rpc -- common/autotest_common.sh@971 -- # kill 58659 00:06:15.521 11:09:52 alias_rpc -- common/autotest_common.sh@976 -- # wait 58659 00:06:18.053 ************************************ 00:06:18.053 END TEST alias_rpc 00:06:18.053 ************************************ 00:06:18.053 00:06:18.053 real 0m4.629s 00:06:18.053 user 0m4.427s 00:06:18.053 sys 0m0.787s 00:06:18.053 11:09:55 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.053 11:09:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.053 11:09:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:18.053 11:09:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:18.053 11:09:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:18.053 11:09:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.053 11:09:55 -- common/autotest_common.sh@10 -- # set +x 00:06:18.053 ************************************ 00:06:18.053 START TEST spdkcli_tcp 00:06:18.053 ************************************ 00:06:18.053 11:09:55 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:18.311 * Looking for test storage... 
00:06:18.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:18.311 11:09:55 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:18.311 11:09:55 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:18.311 11:09:55 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:18.311 11:09:55 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:18.311 11:09:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.312 11:09:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:18.312 11:09:55 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.312 11:09:55 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:18.312 11:09:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:18.312 11:09:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.312 11:09:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:18.312 11:09:55 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.312 11:09:55 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.312 11:09:55 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.312 11:09:55 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:18.312 11:09:55 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.312 11:09:55 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:18.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.312 --rc genhtml_branch_coverage=1 00:06:18.312 --rc genhtml_function_coverage=1 00:06:18.312 --rc genhtml_legend=1 00:06:18.312 --rc geninfo_all_blocks=1 00:06:18.312 --rc geninfo_unexecuted_blocks=1 00:06:18.312 00:06:18.312 ' 00:06:18.312 11:09:55 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:18.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.312 --rc genhtml_branch_coverage=1 00:06:18.312 --rc genhtml_function_coverage=1 00:06:18.312 --rc genhtml_legend=1 00:06:18.312 --rc geninfo_all_blocks=1 00:06:18.312 --rc geninfo_unexecuted_blocks=1 00:06:18.312 
00:06:18.312 ' 00:06:18.312 11:09:55 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:18.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.312 --rc genhtml_branch_coverage=1 00:06:18.312 --rc genhtml_function_coverage=1 00:06:18.312 --rc genhtml_legend=1 00:06:18.312 --rc geninfo_all_blocks=1 00:06:18.312 --rc geninfo_unexecuted_blocks=1 00:06:18.312 00:06:18.312 ' 00:06:18.312 11:09:55 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:18.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.312 --rc genhtml_branch_coverage=1 00:06:18.312 --rc genhtml_function_coverage=1 00:06:18.312 --rc genhtml_legend=1 00:06:18.312 --rc geninfo_all_blocks=1 00:06:18.312 --rc geninfo_unexecuted_blocks=1 00:06:18.312 00:06:18.312 ' 00:06:18.312 11:09:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:18.312 11:09:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:18.312 11:09:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:18.312 11:09:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:18.312 11:09:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:18.312 11:09:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:18.312 11:09:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:18.312 11:09:55 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:18.312 11:09:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.312 11:09:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58777 00:06:18.312 11:09:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:18.312 11:09:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58777 00:06:18.312 11:09:55 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58777 ']' 00:06:18.312 11:09:55 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.312 11:09:55 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:18.312 11:09:55 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.312 11:09:55 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:18.312 11:09:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.571 [2024-11-15 11:09:55.747238] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:06:18.571 [2024-11-15 11:09:55.747372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58777 ] 00:06:18.571 [2024-11-15 11:09:55.931285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.829 [2024-11-15 11:09:56.060086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.829 [2024-11-15 11:09:56.060121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.765 11:09:57 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.765 11:09:57 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:19.765 11:09:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58794 00:06:19.765 11:09:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:19.765 11:09:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:20.025 [ 00:06:20.025 "bdev_malloc_delete", 00:06:20.025 "bdev_malloc_create", 00:06:20.025 "bdev_null_resize", 00:06:20.025 "bdev_null_delete", 00:06:20.025 "bdev_null_create", 00:06:20.025 "bdev_nvme_cuse_unregister", 00:06:20.025 "bdev_nvme_cuse_register", 00:06:20.025 "bdev_opal_new_user", 00:06:20.025 "bdev_opal_set_lock_state", 00:06:20.025 "bdev_opal_delete", 00:06:20.025 "bdev_opal_get_info", 00:06:20.025 "bdev_opal_create", 00:06:20.025 "bdev_nvme_opal_revert", 00:06:20.025 "bdev_nvme_opal_init", 00:06:20.025 "bdev_nvme_send_cmd", 00:06:20.025 "bdev_nvme_set_keys", 00:06:20.025 "bdev_nvme_get_path_iostat", 00:06:20.025 "bdev_nvme_get_mdns_discovery_info", 00:06:20.025 "bdev_nvme_stop_mdns_discovery", 00:06:20.025 "bdev_nvme_start_mdns_discovery", 00:06:20.025 "bdev_nvme_set_multipath_policy", 00:06:20.025 "bdev_nvme_set_preferred_path", 00:06:20.025 "bdev_nvme_get_io_paths", 00:06:20.025 "bdev_nvme_remove_error_injection", 00:06:20.025 "bdev_nvme_add_error_injection", 00:06:20.025 "bdev_nvme_get_discovery_info", 00:06:20.025 "bdev_nvme_stop_discovery", 00:06:20.025 "bdev_nvme_start_discovery", 00:06:20.025 "bdev_nvme_get_controller_health_info", 00:06:20.025 "bdev_nvme_disable_controller", 00:06:20.025 "bdev_nvme_enable_controller", 00:06:20.025 "bdev_nvme_reset_controller", 00:06:20.025 "bdev_nvme_get_transport_statistics", 00:06:20.025 "bdev_nvme_apply_firmware", 00:06:20.025 "bdev_nvme_detach_controller", 00:06:20.025 "bdev_nvme_get_controllers", 00:06:20.025 "bdev_nvme_attach_controller", 00:06:20.025 "bdev_nvme_set_hotplug", 00:06:20.025 "bdev_nvme_set_options", 00:06:20.025 "bdev_passthru_delete", 00:06:20.025 "bdev_passthru_create", 00:06:20.025 "bdev_lvol_set_parent_bdev", 00:06:20.025 "bdev_lvol_set_parent", 00:06:20.025 "bdev_lvol_check_shallow_copy", 00:06:20.025 "bdev_lvol_start_shallow_copy", 00:06:20.025 "bdev_lvol_grow_lvstore", 00:06:20.025 "bdev_lvol_get_lvols", 00:06:20.025 "bdev_lvol_get_lvstores", 00:06:20.025 "bdev_lvol_delete", 00:06:20.025 "bdev_lvol_set_read_only", 00:06:20.025 "bdev_lvol_resize", 00:06:20.025 "bdev_lvol_decouple_parent", 00:06:20.025 "bdev_lvol_inflate", 00:06:20.025 "bdev_lvol_rename", 00:06:20.025 "bdev_lvol_clone_bdev", 00:06:20.025 "bdev_lvol_clone", 00:06:20.025 "bdev_lvol_snapshot", 00:06:20.025 "bdev_lvol_create", 00:06:20.025 "bdev_lvol_delete_lvstore", 00:06:20.025 "bdev_lvol_rename_lvstore", 00:06:20.025 
"bdev_lvol_create_lvstore", 00:06:20.025 "bdev_raid_set_options", 00:06:20.025 "bdev_raid_remove_base_bdev", 00:06:20.025 "bdev_raid_add_base_bdev", 00:06:20.025 "bdev_raid_delete", 00:06:20.025 "bdev_raid_create", 00:06:20.025 "bdev_raid_get_bdevs", 00:06:20.025 "bdev_error_inject_error", 00:06:20.025 "bdev_error_delete", 00:06:20.025 "bdev_error_create", 00:06:20.025 "bdev_split_delete", 00:06:20.025 "bdev_split_create", 00:06:20.025 "bdev_delay_delete", 00:06:20.025 "bdev_delay_create", 00:06:20.025 "bdev_delay_update_latency", 00:06:20.025 "bdev_zone_block_delete", 00:06:20.025 "bdev_zone_block_create", 00:06:20.025 "blobfs_create", 00:06:20.025 "blobfs_detect", 00:06:20.025 "blobfs_set_cache_size", 00:06:20.025 "bdev_xnvme_delete", 00:06:20.025 "bdev_xnvme_create", 00:06:20.025 "bdev_aio_delete", 00:06:20.025 "bdev_aio_rescan", 00:06:20.025 "bdev_aio_create", 00:06:20.025 "bdev_ftl_set_property", 00:06:20.025 "bdev_ftl_get_properties", 00:06:20.025 "bdev_ftl_get_stats", 00:06:20.025 "bdev_ftl_unmap", 00:06:20.025 "bdev_ftl_unload", 00:06:20.025 "bdev_ftl_delete", 00:06:20.025 "bdev_ftl_load", 00:06:20.025 "bdev_ftl_create", 00:06:20.025 "bdev_virtio_attach_controller", 00:06:20.025 "bdev_virtio_scsi_get_devices", 00:06:20.025 "bdev_virtio_detach_controller", 00:06:20.025 "bdev_virtio_blk_set_hotplug", 00:06:20.025 "bdev_iscsi_delete", 00:06:20.025 "bdev_iscsi_create", 00:06:20.025 "bdev_iscsi_set_options", 00:06:20.025 "accel_error_inject_error", 00:06:20.025 "ioat_scan_accel_module", 00:06:20.025 "dsa_scan_accel_module", 00:06:20.025 "iaa_scan_accel_module", 00:06:20.025 "keyring_file_remove_key", 00:06:20.025 "keyring_file_add_key", 00:06:20.025 "keyring_linux_set_options", 00:06:20.025 "fsdev_aio_delete", 00:06:20.025 "fsdev_aio_create", 00:06:20.025 "iscsi_get_histogram", 00:06:20.025 "iscsi_enable_histogram", 00:06:20.025 "iscsi_set_options", 00:06:20.025 "iscsi_get_auth_groups", 00:06:20.025 "iscsi_auth_group_remove_secret", 00:06:20.025 "iscsi_auth_group_add_secret", 00:06:20.025 "iscsi_delete_auth_group", 00:06:20.025 "iscsi_create_auth_group", 00:06:20.025 "iscsi_set_discovery_auth", 00:06:20.025 "iscsi_get_options", 00:06:20.025 "iscsi_target_node_request_logout", 00:06:20.025 "iscsi_target_node_set_redirect", 00:06:20.025 "iscsi_target_node_set_auth", 00:06:20.025 "iscsi_target_node_add_lun", 00:06:20.025 "iscsi_get_stats", 00:06:20.025 "iscsi_get_connections", 00:06:20.025 "iscsi_portal_group_set_auth", 00:06:20.025 "iscsi_start_portal_group", 00:06:20.025 "iscsi_delete_portal_group", 00:06:20.025 "iscsi_create_portal_group", 00:06:20.025 "iscsi_get_portal_groups", 00:06:20.025 "iscsi_delete_target_node", 00:06:20.025 "iscsi_target_node_remove_pg_ig_maps", 00:06:20.025 "iscsi_target_node_add_pg_ig_maps", 00:06:20.025 "iscsi_create_target_node", 00:06:20.025 "iscsi_get_target_nodes", 00:06:20.025 "iscsi_delete_initiator_group", 00:06:20.025 "iscsi_initiator_group_remove_initiators", 00:06:20.025 "iscsi_initiator_group_add_initiators", 00:06:20.025 "iscsi_create_initiator_group", 00:06:20.025 "iscsi_get_initiator_groups", 00:06:20.025 "nvmf_set_crdt", 00:06:20.025 "nvmf_set_config", 00:06:20.025 "nvmf_set_max_subsystems", 00:06:20.025 "nvmf_stop_mdns_prr", 00:06:20.025 "nvmf_publish_mdns_prr", 00:06:20.025 "nvmf_subsystem_get_listeners", 00:06:20.025 "nvmf_subsystem_get_qpairs", 00:06:20.025 "nvmf_subsystem_get_controllers", 00:06:20.025 "nvmf_get_stats", 00:06:20.025 "nvmf_get_transports", 00:06:20.025 "nvmf_create_transport", 00:06:20.025 "nvmf_get_targets", 00:06:20.025 
"nvmf_delete_target", 00:06:20.025 "nvmf_create_target", 00:06:20.025 "nvmf_subsystem_allow_any_host", 00:06:20.025 "nvmf_subsystem_set_keys", 00:06:20.025 "nvmf_subsystem_remove_host", 00:06:20.025 "nvmf_subsystem_add_host", 00:06:20.025 "nvmf_ns_remove_host", 00:06:20.025 "nvmf_ns_add_host", 00:06:20.025 "nvmf_subsystem_remove_ns", 00:06:20.025 "nvmf_subsystem_set_ns_ana_group", 00:06:20.025 "nvmf_subsystem_add_ns", 00:06:20.025 "nvmf_subsystem_listener_set_ana_state", 00:06:20.025 "nvmf_discovery_get_referrals", 00:06:20.025 "nvmf_discovery_remove_referral", 00:06:20.025 "nvmf_discovery_add_referral", 00:06:20.025 "nvmf_subsystem_remove_listener", 00:06:20.025 "nvmf_subsystem_add_listener", 00:06:20.026 "nvmf_delete_subsystem", 00:06:20.026 "nvmf_create_subsystem", 00:06:20.026 "nvmf_get_subsystems", 00:06:20.026 "env_dpdk_get_mem_stats", 00:06:20.026 "nbd_get_disks", 00:06:20.026 "nbd_stop_disk", 00:06:20.026 "nbd_start_disk", 00:06:20.026 "ublk_recover_disk", 00:06:20.026 "ublk_get_disks", 00:06:20.026 "ublk_stop_disk", 00:06:20.026 "ublk_start_disk", 00:06:20.026 "ublk_destroy_target", 00:06:20.026 "ublk_create_target", 00:06:20.026 "virtio_blk_create_transport", 00:06:20.026 "virtio_blk_get_transports", 00:06:20.026 "vhost_controller_set_coalescing", 00:06:20.026 "vhost_get_controllers", 00:06:20.026 "vhost_delete_controller", 00:06:20.026 "vhost_create_blk_controller", 00:06:20.026 "vhost_scsi_controller_remove_target", 00:06:20.026 "vhost_scsi_controller_add_target", 00:06:20.026 "vhost_start_scsi_controller", 00:06:20.026 "vhost_create_scsi_controller", 00:06:20.026 "thread_set_cpumask", 00:06:20.026 "scheduler_set_options", 00:06:20.026 "framework_get_governor", 00:06:20.026 "framework_get_scheduler", 00:06:20.026 "framework_set_scheduler", 00:06:20.026 "framework_get_reactors", 00:06:20.026 "thread_get_io_channels", 00:06:20.026 "thread_get_pollers", 00:06:20.026 "thread_get_stats", 00:06:20.026 "framework_monitor_context_switch", 00:06:20.026 "spdk_kill_instance", 00:06:20.026 "log_enable_timestamps", 00:06:20.026 "log_get_flags", 00:06:20.026 "log_clear_flag", 00:06:20.026 "log_set_flag", 00:06:20.026 "log_get_level", 00:06:20.026 "log_set_level", 00:06:20.026 "log_get_print_level", 00:06:20.026 "log_set_print_level", 00:06:20.026 "framework_enable_cpumask_locks", 00:06:20.026 "framework_disable_cpumask_locks", 00:06:20.026 "framework_wait_init", 00:06:20.026 "framework_start_init", 00:06:20.026 "scsi_get_devices", 00:06:20.026 "bdev_get_histogram", 00:06:20.026 "bdev_enable_histogram", 00:06:20.026 "bdev_set_qos_limit", 00:06:20.026 "bdev_set_qd_sampling_period", 00:06:20.026 "bdev_get_bdevs", 00:06:20.026 "bdev_reset_iostat", 00:06:20.026 "bdev_get_iostat", 00:06:20.026 "bdev_examine", 00:06:20.026 "bdev_wait_for_examine", 00:06:20.026 "bdev_set_options", 00:06:20.026 "accel_get_stats", 00:06:20.026 "accel_set_options", 00:06:20.026 "accel_set_driver", 00:06:20.026 "accel_crypto_key_destroy", 00:06:20.026 "accel_crypto_keys_get", 00:06:20.026 "accel_crypto_key_create", 00:06:20.026 "accel_assign_opc", 00:06:20.026 "accel_get_module_info", 00:06:20.026 "accel_get_opc_assignments", 00:06:20.026 "vmd_rescan", 00:06:20.026 "vmd_remove_device", 00:06:20.026 "vmd_enable", 00:06:20.026 "sock_get_default_impl", 00:06:20.026 "sock_set_default_impl", 00:06:20.026 "sock_impl_set_options", 00:06:20.026 "sock_impl_get_options", 00:06:20.026 "iobuf_get_stats", 00:06:20.026 "iobuf_set_options", 00:06:20.026 "keyring_get_keys", 00:06:20.026 "framework_get_pci_devices", 00:06:20.026 
"framework_get_config", 00:06:20.026 "framework_get_subsystems", 00:06:20.026 "fsdev_set_opts", 00:06:20.026 "fsdev_get_opts", 00:06:20.026 "trace_get_info", 00:06:20.026 "trace_get_tpoint_group_mask", 00:06:20.026 "trace_disable_tpoint_group", 00:06:20.026 "trace_enable_tpoint_group", 00:06:20.026 "trace_clear_tpoint_mask", 00:06:20.026 "trace_set_tpoint_mask", 00:06:20.026 "notify_get_notifications", 00:06:20.026 "notify_get_types", 00:06:20.026 "spdk_get_version", 00:06:20.026 "rpc_get_methods" 00:06:20.026 ] 00:06:20.026 11:09:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:20.026 11:09:57 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:20.026 11:09:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.026 11:09:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:20.026 11:09:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58777 00:06:20.026 11:09:57 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58777 ']' 00:06:20.026 11:09:57 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58777 00:06:20.026 11:09:57 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:20.026 11:09:57 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:20.026 11:09:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58777 00:06:20.026 killing process with pid 58777 00:06:20.026 11:09:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:20.026 11:09:57 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:20.026 11:09:57 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58777' 00:06:20.026 11:09:57 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58777 00:06:20.026 11:09:57 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58777 00:06:22.560 ************************************ 00:06:22.560 END TEST spdkcli_tcp 00:06:22.560 ************************************ 00:06:22.560 00:06:22.560 real 0m4.562s 00:06:22.560 user 0m7.930s 00:06:22.560 sys 0m0.818s 00:06:22.560 11:09:59 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.560 11:09:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.819 11:10:00 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:22.819 11:10:00 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:22.819 11:10:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.819 11:10:00 -- common/autotest_common.sh@10 -- # set +x 00:06:22.819 ************************************ 00:06:22.819 START TEST dpdk_mem_utility 00:06:22.819 ************************************ 00:06:22.819 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:22.819 * Looking for test storage... 
00:06:22.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:22.819 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:22.819 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:22.819 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:23.078 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.078 11:10:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:23.079 11:10:00 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.079 11:10:00 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:23.079 11:10:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:23.079 11:10:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.079 11:10:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:23.079 11:10:00 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.079 11:10:00 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.079 11:10:00 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.079 11:10:00 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:23.079 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.079 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:23.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.079 --rc genhtml_branch_coverage=1 00:06:23.079 --rc genhtml_function_coverage=1 00:06:23.079 --rc genhtml_legend=1 00:06:23.079 --rc geninfo_all_blocks=1 00:06:23.079 --rc geninfo_unexecuted_blocks=1 00:06:23.079 00:06:23.079 ' 00:06:23.079 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:23.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.079 --rc 
genhtml_branch_coverage=1 00:06:23.079 --rc genhtml_function_coverage=1 00:06:23.079 --rc genhtml_legend=1 00:06:23.079 --rc geninfo_all_blocks=1 00:06:23.079 --rc geninfo_unexecuted_blocks=1 00:06:23.079 00:06:23.079 ' 00:06:23.079 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:23.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.079 --rc genhtml_branch_coverage=1 00:06:23.079 --rc genhtml_function_coverage=1 00:06:23.079 --rc genhtml_legend=1 00:06:23.079 --rc geninfo_all_blocks=1 00:06:23.079 --rc geninfo_unexecuted_blocks=1 00:06:23.079 00:06:23.079 ' 00:06:23.079 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:23.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.079 --rc genhtml_branch_coverage=1 00:06:23.079 --rc genhtml_function_coverage=1 00:06:23.079 --rc genhtml_legend=1 00:06:23.079 --rc geninfo_all_blocks=1 00:06:23.079 --rc geninfo_unexecuted_blocks=1 00:06:23.079 00:06:23.079 ' 00:06:23.079 11:10:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:23.079 11:10:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58899 00:06:23.079 11:10:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:23.079 11:10:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58899 00:06:23.079 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58899 ']' 00:06:23.079 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.079 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:23.079 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.079 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:23.079 11:10:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.079 [2024-11-15 11:10:00.384123] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:06:23.079 [2024-11-15 11:10:00.384469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58899 ] 00:06:23.338 [2024-11-15 11:10:00.569171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.338 [2024-11-15 11:10:00.711633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.721 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.721 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:24.721 11:10:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:24.721 11:10:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:24.721 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.721 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:24.721 { 00:06:24.721 "filename": "/tmp/spdk_mem_dump.txt" 00:06:24.721 } 00:06:24.721 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.721 11:10:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:24.721 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:24.721 1 heaps totaling size 824.000000 MiB 00:06:24.721 size: 824.000000 MiB heap id: 0 00:06:24.721 end heaps---------- 00:06:24.721 9 mempools totaling size 603.782043 MiB 00:06:24.721 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:24.721 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:24.721 size: 100.555481 MiB name: bdev_io_58899 00:06:24.721 size: 50.003479 MiB name: msgpool_58899 00:06:24.721 size: 36.509338 MiB name: fsdev_io_58899 00:06:24.721 size: 21.763794 MiB name: PDU_Pool 00:06:24.721 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:24.721 size: 4.133484 MiB name: evtpool_58899 00:06:24.721 size: 0.026123 MiB name: Session_Pool 00:06:24.721 end mempools------- 00:06:24.721 6 memzones totaling size 4.142822 MiB 00:06:24.721 size: 1.000366 MiB name: RG_ring_0_58899 00:06:24.721 size: 1.000366 MiB name: RG_ring_1_58899 00:06:24.721 size: 1.000366 MiB name: RG_ring_4_58899 00:06:24.721 size: 1.000366 MiB name: RG_ring_5_58899 00:06:24.722 size: 0.125366 MiB name: RG_ring_2_58899 00:06:24.722 size: 0.015991 MiB name: RG_ring_3_58899 00:06:24.722 end memzones------- 00:06:24.722 11:10:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:24.722 heap id: 0 total size: 824.000000 MiB number of busy elements: 320 number of free elements: 18 00:06:24.722 list of free elements. 
00:06:24.722 [DPDK memory dump elided: per-element listing of a 16.780151 MiB segment, "list of standard malloc elements. size: 199.288940 MiB", and "list of memzone associated elements. size: 607.930908 MiB" (msgpool, evtpool, bdev_io, fsdev_io, PDU, SCSI_TASK and Session pools/rings for pid 58899); several hundred repetitive "element at address: 0x... with size: ... MiB" entries omitted]
00:06:24.725 11:10:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:24.725 11:10:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58899 00:06:24.725 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58899 ']' 00:06:24.725 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58899 00:06:24.725 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:06:24.725 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:24.725 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58899 00:06:24.725 killing process with pid 58899 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:24.726 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:24.726 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58899' 00:06:24.726 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58899 00:06:24.726 11:10:01 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58899 00:06:27.260 00:06:27.260 real 0m4.486s 00:06:27.260 user 0m4.175s 00:06:27.260 sys 0m0.769s 00:06:27.260 11:10:04 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.260 ************************************ 00:06:27.260 END TEST dpdk_mem_utility 00:06:27.260 ************************************ 00:06:27.260 11:10:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:27.260 11:10:04 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:27.260 11:10:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:27.260 11:10:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.260 11:10:04 -- common/autotest_common.sh@10 -- # set +x
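The dump elided above is what test_dpdk_mem_info.sh harvests from the running target over JSON-RPC before killing it. As a minimal sketch, the same listing can be requested by hand from any SPDK app; the binary path and the /tmp/spdk_mem_dump.txt output location below are assumptions about a default local setup, not taken from this log:

# Start a target, request a DPDK heap/memzone dump over RPC, then read it back.
./build/bin/spdk_tgt &                    # assumed app; any SPDK app exposing the env_dpdk RPCs works
./scripts/rpc.py env_dpdk_get_mem_stats   # asks the env layer to write its malloc element / memzone listing
cat /tmp/spdk_mem_dump.txt                # assumed default dump location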
00:06:27.260 ************************************ 00:06:27.260 START TEST event 00:06:27.260 ************************************ 00:06:27.260 11:10:04 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:27.521 * Looking for test storage... 00:06:27.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:27.521 11:10:04 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:27.521 11:10:04 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:27.521 11:10:04 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:27.521 11:10:04 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:27.521 11:10:04 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.521 11:10:04 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.521 11:10:04 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.521 11:10:04 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.521 11:10:04 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.521 11:10:04 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.521 11:10:04 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.521 11:10:04 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.521 11:10:04 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.521 11:10:04 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.521 11:10:04 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.521 11:10:04 event -- scripts/common.sh@344 -- # case "$op" in 00:06:27.521 11:10:04 event -- scripts/common.sh@345 -- # : 1 00:06:27.521 11:10:04 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.521 11:10:04 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.521 11:10:04 event -- scripts/common.sh@365 -- # decimal 1 00:06:27.521 11:10:04 event -- scripts/common.sh@353 -- # local d=1 00:06:27.521 11:10:04 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.521 11:10:04 event -- scripts/common.sh@355 -- # echo 1 00:06:27.521 11:10:04 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.521 11:10:04 event -- scripts/common.sh@366 -- # decimal 2 00:06:27.521 11:10:04 event -- scripts/common.sh@353 -- # local d=2 00:06:27.521 11:10:04 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.521 11:10:04 event -- scripts/common.sh@355 -- # echo 2 00:06:27.521 11:10:04 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.521 11:10:04 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.521 11:10:04 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.521 11:10:04 event -- scripts/common.sh@368 -- # return 0 00:06:27.521 11:10:04 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.521 11:10:04 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:27.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.521 --rc genhtml_branch_coverage=1 00:06:27.521 --rc genhtml_function_coverage=1 00:06:27.521 --rc genhtml_legend=1 00:06:27.521 --rc geninfo_all_blocks=1 00:06:27.521 --rc geninfo_unexecuted_blocks=1 00:06:27.521 00:06:27.521 ' 00:06:27.522 11:10:04 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:27.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.522 --rc genhtml_branch_coverage=1 00:06:27.522 --rc genhtml_function_coverage=1 00:06:27.522 --rc genhtml_legend=1 00:06:27.522 --rc 
geninfo_all_blocks=1 00:06:27.522 --rc geninfo_unexecuted_blocks=1 00:06:27.522 00:06:27.522 ' 00:06:27.522 11:10:04 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:27.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.522 --rc genhtml_branch_coverage=1 00:06:27.522 --rc genhtml_function_coverage=1 00:06:27.522 --rc genhtml_legend=1 00:06:27.522 --rc geninfo_all_blocks=1 00:06:27.522 --rc geninfo_unexecuted_blocks=1 00:06:27.522 00:06:27.522 ' 00:06:27.522 11:10:04 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:27.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.522 --rc genhtml_branch_coverage=1 00:06:27.522 --rc genhtml_function_coverage=1 00:06:27.522 --rc genhtml_legend=1 00:06:27.522 --rc geninfo_all_blocks=1 00:06:27.522 --rc geninfo_unexecuted_blocks=1 00:06:27.522 00:06:27.522 ' 00:06:27.522 11:10:04 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:27.522 11:10:04 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:27.522 11:10:04 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:27.522 11:10:04 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:27.522 11:10:04 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.522 11:10:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.522 ************************************ 00:06:27.522 START TEST event_perf 00:06:27.522 ************************************ 00:06:27.522 11:10:04 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:27.522 Running I/O for 1 seconds...[2024-11-15 11:10:04.909026] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:06:27.522 [2024-11-15 11:10:04.909230] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59013 ] 00:06:27.782 [2024-11-15 11:10:05.093194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.041 [2024-11-15 11:10:05.243141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.041 [2024-11-15 11:10:05.243317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.041 Running I/O for 1 seconds...[2024-11-15 11:10:05.243443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.041 [2024-11-15 11:10:05.243527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.419 00:06:29.419 lcore 0: 110773 00:06:29.419 lcore 1: 110774 00:06:29.419 lcore 2: 110775 00:06:29.419 lcore 3: 110773 00:06:29.419 done. 
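Each of the four lcores above retired roughly 110,000 events in the one-second window, i.e. about 440k events/sec across the 0xF core mask. The benchmark can be rerun outside the run_test wrapper with exactly the arguments shown in the trace:

# -m 0xF runs reactors on cores 0-3, -t 1 measures for one second (path relative to this checkout).
./test/event/event_perf/event_perf -m 0xF -t 1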
00:06:29.419 00:06:29.419 real 0m1.653s 00:06:29.419 user 0m4.373s 00:06:29.419 sys 0m0.155s 00:06:29.419 11:10:06 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.420 11:10:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.420 ************************************ 00:06:29.420 END TEST event_perf 00:06:29.420 ************************************ 00:06:29.420 11:10:06 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:29.420 11:10:06 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:29.420 11:10:06 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.420 11:10:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.420 ************************************ 00:06:29.420 START TEST event_reactor 00:06:29.420 ************************************ 00:06:29.420 11:10:06 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:29.420 [2024-11-15 11:10:06.644293] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:06:29.420 [2024-11-15 11:10:06.644599] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59058 ] 00:06:29.678 [2024-11-15 11:10:06.831901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.678 [2024-11-15 11:10:06.977533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.058 test_start 00:06:31.058 oneshot 00:06:31.058 tick 100 00:06:31.058 tick 100 00:06:31.058 tick 250 00:06:31.058 tick 100 00:06:31.058 tick 100 00:06:31.058 tick 100 00:06:31.058 tick 250 00:06:31.058 tick 500 00:06:31.058 tick 100 00:06:31.058 tick 100 00:06:31.058 tick 250 00:06:31.058 tick 100 00:06:31.058 tick 100 00:06:31.058 test_end 00:06:31.058 00:06:31.058 real 0m1.635s 00:06:31.058 user 0m1.401s 00:06:31.058 sys 0m0.126s 00:06:31.058 11:10:08 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:31.058 11:10:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:31.058 ************************************ 00:06:31.058 END TEST event_reactor 00:06:31.058 ************************************ 00:06:31.058 11:10:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:31.058 11:10:08 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:31.058 11:10:08 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:31.058 11:10:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.058 ************************************ 00:06:31.058 START TEST event_reactor_perf 00:06:31.058 ************************************ 00:06:31.058 11:10:08 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:31.058 [2024-11-15 11:10:08.348170] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
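The event_reactor output above (a single oneshot followed by repeating tick 100/250/500 lines) presumably reflects a one-shot event plus timed pollers firing at three different periods on the lone 0x1 reactor; the exact wiring is internal to the test binary. Rerunning it directly uses the same invocation the harness traced:

# Single-reactor timer test, one-second run.
./test/event/reactor/reactor -t 1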
00:06:31.058 [2024-11-15 11:10:08.348408] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59094 ] 00:06:31.317 [2024-11-15 11:10:08.527954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.317 [2024-11-15 11:10:08.671991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.693 test_start 00:06:32.693 test_end 00:06:32.693 Performance: 380981 events per second 00:06:32.693 00:06:32.693 real 0m1.618s 00:06:32.693 user 0m1.386s 00:06:32.693 sys 0m0.122s 00:06:32.693 11:10:09 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:32.693 ************************************ 00:06:32.693 END TEST event_reactor_perf 00:06:32.693 ************************************ 00:06:32.693 11:10:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.693 11:10:09 event -- event/event.sh@49 -- # uname -s 00:06:32.693 11:10:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:32.693 11:10:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:32.693 11:10:09 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:32.693 11:10:09 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:32.693 11:10:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.693 ************************************ 00:06:32.693 START TEST event_scheduler 00:06:32.693 ************************************ 00:06:32.693 11:10:09 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:32.953 * Looking for test storage... 
00:06:32.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.953 11:10:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:32.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.953 --rc genhtml_branch_coverage=1 00:06:32.953 --rc genhtml_function_coverage=1 00:06:32.953 --rc genhtml_legend=1 00:06:32.953 --rc geninfo_all_blocks=1 00:06:32.953 --rc geninfo_unexecuted_blocks=1 00:06:32.953 00:06:32.953 ' 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:32.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.953 --rc genhtml_branch_coverage=1 00:06:32.953 --rc genhtml_function_coverage=1 00:06:32.953 --rc genhtml_legend=1 00:06:32.953 --rc geninfo_all_blocks=1 00:06:32.953 --rc geninfo_unexecuted_blocks=1 00:06:32.953 00:06:32.953 ' 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:32.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.953 --rc genhtml_branch_coverage=1 00:06:32.953 --rc genhtml_function_coverage=1 00:06:32.953 --rc genhtml_legend=1 00:06:32.953 --rc geninfo_all_blocks=1 00:06:32.953 --rc geninfo_unexecuted_blocks=1 00:06:32.953 00:06:32.953 ' 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:32.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.953 --rc genhtml_branch_coverage=1 00:06:32.953 --rc genhtml_function_coverage=1 00:06:32.953 --rc genhtml_legend=1 00:06:32.953 --rc geninfo_all_blocks=1 00:06:32.953 --rc geninfo_unexecuted_blocks=1 00:06:32.953 00:06:32.953 ' 00:06:32.953 11:10:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:32.953 11:10:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:32.953 11:10:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59165 00:06:32.953 11:10:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.953 11:10:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59165 00:06:32.953 11:10:10 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 59165 ']' 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:32.953 11:10:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.212 [2024-11-15 11:10:10.353570] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:06:33.213 [2024-11-15 11:10:10.353696] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59165 ] 00:06:33.213 [2024-11-15 11:10:10.538676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.472 [2024-11-15 11:10:10.668244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.472 [2024-11-15 11:10:10.668376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.472 [2024-11-15 11:10:10.668460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.472 [2024-11-15 11:10:10.668488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.069 11:10:11 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.069 11:10:11 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:34.069 11:10:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:34.069 11:10:11 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.069 11:10:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.069 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:34.069 POWER: Cannot set governor of lcore 0 to userspace 00:06:34.069 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:34.069 POWER: Cannot set governor of lcore 0 to performance 00:06:34.069 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:34.069 POWER: Cannot set governor of lcore 0 to userspace 00:06:34.069 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:34.069 POWER: Cannot set governor of lcore 0 to userspace 00:06:34.069 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:34.069 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:34.069 POWER: Unable to set Power Management Environment for lcore 0 00:06:34.069 [2024-11-15 11:10:11.254469] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:34.069 [2024-11-15 11:10:11.254520] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:34.069 [2024-11-15 11:10:11.254597] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:34.069 [2024-11-15 11:10:11.254689] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:34.069 [2024-11-15 11:10:11.254725] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:34.069 [2024-11-15 11:10:11.254836] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:34.069 11:10:11 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.069 11:10:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:34.069 11:10:11 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.069 11:10:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.369 [2024-11-15 11:10:11.597919] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:34.369 11:10:11 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.369 11:10:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:34.369 11:10:11 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:34.369 11:10:11 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:34.369 11:10:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.369 ************************************ 00:06:34.369 START TEST scheduler_create_thread 00:06:34.369 ************************************ 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.369 2 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.369 3 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.369 4 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.369 5 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.369 6 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.369 7 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.369 8 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.369 9 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.369 10 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.369 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.370 11:10:11 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.370 11:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:34.370 11:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:34.370 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.370 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.370 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.370 11:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:34.370 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.370 11:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.304 11:10:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.304 11:10:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:35.304 11:10:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:35.304 11:10:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.304 11:10:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.679 ************************************ 00:06:36.679 END TEST scheduler_create_thread 00:06:36.679 ************************************ 00:06:36.679 11:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.679 00:06:36.679 real 0m2.137s 00:06:36.679 user 0m0.021s 00:06:36.679 sys 0m0.012s 00:06:36.679 11:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.679 11:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.679 11:10:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:36.679 11:10:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59165 00:06:36.679 11:10:13 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 59165 ']' 00:06:36.679 11:10:13 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 59165 00:06:36.679 11:10:13 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:36.679 11:10:13 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:36.679 11:10:13 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59165 00:06:36.679 killing process with pid 59165 00:06:36.679 11:10:13 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:36.679 11:10:13 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:36.679 11:10:13 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59165' 00:06:36.679 11:10:13 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 59165 00:06:36.679 11:10:13 event.event_scheduler -- 
common/autotest_common.sh@976 -- # wait 59165 00:06:36.938 [2024-11-15 11:10:14.230892] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:38.327 00:06:38.327 real 0m5.416s 00:06:38.327 user 0m9.025s 00:06:38.327 sys 0m0.571s 00:06:38.327 ************************************ 00:06:38.327 END TEST event_scheduler 00:06:38.327 ************************************ 00:06:38.327 11:10:15 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.327 11:10:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.327 11:10:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:38.327 11:10:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:38.327 11:10:15 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:38.327 11:10:15 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.327 11:10:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.327 ************************************ 00:06:38.327 START TEST app_repeat 00:06:38.327 ************************************ 00:06:38.327 11:10:15 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:38.327 Process app_repeat pid: 59271 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59271 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59271' 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:38.327 spdk_app_start Round 0 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:38.327 11:10:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59271 /var/tmp/spdk-nbd.sock 00:06:38.327 11:10:15 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59271 ']' 00:06:38.327 11:10:15 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.327 11:10:15 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:38.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.327 11:10:15 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.327 11:10:15 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:38.327 11:10:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.327 [2024-11-15 11:10:15.584034] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
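With the scheduler suite done (ten threads created with cpumasks 0x1-0x8 and varying active percentages, one set to 50% active, one deleted), app_repeat exercises repeated app start/stop cycles. Once the app is listening on /var/tmp/spdk-nbd.sock, the harness creates two malloc bdevs and exports them over NBD, as the rpc.py calls traced below show; a minimal sketch of that sequence against an already-running app, with sizes and socket path taken from this trace:

# Create two RAM-backed bdevs (64 MB, 4096-byte blocks) and export the first over /dev/nbd0.
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc0
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc1
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0  # requires the kernel nbd module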
00:06:38.327 [2024-11-15 11:10:15.584465] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59271 ] 00:06:38.586 [2024-11-15 11:10:15.751715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.586 [2024-11-15 11:10:15.895907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.586 [2024-11-15 11:10:15.895936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.154 11:10:16 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:39.154 11:10:16 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:39.154 11:10:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.412 Malloc0 00:06:39.412 11:10:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.669 Malloc1 00:06:39.927 11:10:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.927 11:10:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.927 /dev/nbd0 00:06:40.185 11:10:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.185 11:10:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:40.185 11:10:17 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.185 1+0 records in 00:06:40.185 1+0 records out 00:06:40.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003643 s, 11.2 MB/s 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:40.185 11:10:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.185 11:10:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.185 11:10:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:40.185 /dev/nbd1 00:06:40.185 11:10:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.185 11:10:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:40.185 11:10:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:40.444 11:10:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:40.444 11:10:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:40.444 11:10:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:40.444 11:10:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.444 1+0 records in 00:06:40.444 1+0 records out 00:06:40.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040678 s, 10.1 MB/s 00:06:40.444 11:10:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.444 11:10:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:40.444 11:10:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.444 11:10:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:40.444 11:10:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:40.444 11:10:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.444 11:10:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.444 11:10:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.444 11:10:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.444 
11:10:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.444 11:10:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.444 { 00:06:40.444 "nbd_device": "/dev/nbd0", 00:06:40.444 "bdev_name": "Malloc0" 00:06:40.444 }, 00:06:40.444 { 00:06:40.444 "nbd_device": "/dev/nbd1", 00:06:40.444 "bdev_name": "Malloc1" 00:06:40.444 } 00:06:40.444 ]' 00:06:40.444 11:10:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.444 { 00:06:40.444 "nbd_device": "/dev/nbd0", 00:06:40.444 "bdev_name": "Malloc0" 00:06:40.444 }, 00:06:40.444 { 00:06:40.444 "nbd_device": "/dev/nbd1", 00:06:40.444 "bdev_name": "Malloc1" 00:06:40.444 } 00:06:40.444 ]' 00:06:40.444 11:10:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.703 11:10:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.704 /dev/nbd1' 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.704 /dev/nbd1' 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.704 256+0 records in 00:06:40.704 256+0 records out 00:06:40.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124874 s, 84.0 MB/s 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.704 256+0 records in 00:06:40.704 256+0 records out 00:06:40.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031255 s, 33.5 MB/s 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.704 256+0 records in 00:06:40.704 256+0 records out 00:06:40.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.034976 s, 30.0 MB/s 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.704 11:10:17 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.704 11:10:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.963 11:10:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.963 11:10:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.963 11:10:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.963 11:10:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.963 11:10:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.963 11:10:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.963 11:10:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.963 11:10:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.963 11:10:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.963 11:10:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.221 11:10:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.221 11:10:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.221 11:10:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.221 11:10:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.221 11:10:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.221 11:10:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.221 11:10:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.221 11:10:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.221 11:10:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.221 11:10:18 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.221 11:10:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.480 11:10:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.480 11:10:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.480 11:10:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.480 11:10:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.480 11:10:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.480 11:10:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.480 11:10:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.480 11:10:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.480 11:10:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.480 11:10:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.480 11:10:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.480 11:10:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.480 11:10:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.047 11:10:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.423 [2024-11-15 11:10:20.465674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.423 [2024-11-15 11:10:20.610222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.423 [2024-11-15 11:10:20.610224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.682 [2024-11-15 11:10:20.843944] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.682 [2024-11-15 11:10:20.844051] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.057 spdk_app_start Round 1 00:06:45.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.057 11:10:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.057 11:10:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:45.057 11:10:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59271 /var/tmp/spdk-nbd.sock 00:06:45.057 11:10:22 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59271 ']' 00:06:45.057 11:10:22 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.057 11:10:22 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:45.057 11:10:22 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
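[editor's note] That completes one full Round 0 cycle: two malloc bdevs exported over nbd, a random 1 MiB pattern written through each device with O_DIRECT and byte-compared back, then both disks stopped and the app killed via spdk_kill_instance SIGTERM. A condensed sketch of the data path, with the temp file path shortened for illustration:

nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest                                   # repo-relative in the real test
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256         # 1 MiB of random data
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write through the nbd
done
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"                         # verify byte-for-byte
done
rm "$tmp_file"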
00:06:45.057 11:10:22 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:45.057 11:10:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.057 11:10:22 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:45.057 11:10:22 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:45.057 11:10:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.315 Malloc0 00:06:45.315 11:10:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.573 Malloc1 00:06:45.573 11:10:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.573 11:10:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.831 /dev/nbd0 00:06:45.831 11:10:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.831 11:10:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.831 11:10:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:45.831 11:10:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:45.831 11:10:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:45.831 11:10:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:45.831 11:10:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:45.831 11:10:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:45.832 11:10:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:45.832 11:10:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:45.832 11:10:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.832 1+0 records in 00:06:45.832 1+0 records out 
00:06:45.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223676 s, 18.3 MB/s 00:06:45.832 11:10:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.090 11:10:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:46.090 11:10:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.090 11:10:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:46.090 11:10:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:46.090 11:10:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.090 11:10:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.090 11:10:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.348 /dev/nbd1 00:06:46.348 11:10:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.348 11:10:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.348 1+0 records in 00:06:46.348 1+0 records out 00:06:46.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418117 s, 9.8 MB/s 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:46.348 11:10:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:46.348 11:10:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.348 11:10:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.348 11:10:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.348 11:10:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.348 11:10:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.607 { 00:06:46.607 "nbd_device": "/dev/nbd0", 00:06:46.607 "bdev_name": "Malloc0" 00:06:46.607 }, 00:06:46.607 { 00:06:46.607 "nbd_device": "/dev/nbd1", 00:06:46.607 "bdev_name": "Malloc1" 00:06:46.607 } 
00:06:46.607 ]' 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.607 { 00:06:46.607 "nbd_device": "/dev/nbd0", 00:06:46.607 "bdev_name": "Malloc0" 00:06:46.607 }, 00:06:46.607 { 00:06:46.607 "nbd_device": "/dev/nbd1", 00:06:46.607 "bdev_name": "Malloc1" 00:06:46.607 } 00:06:46.607 ]' 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.607 /dev/nbd1' 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.607 /dev/nbd1' 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.607 256+0 records in 00:06:46.607 256+0 records out 00:06:46.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137399 s, 76.3 MB/s 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.607 256+0 records in 00:06:46.607 256+0 records out 00:06:46.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308769 s, 34.0 MB/s 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.607 11:10:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.865 256+0 records in 00:06:46.866 256+0 records out 00:06:46.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.035831 s, 29.3 MB/s 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.866 11:10:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.124 11:10:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.382 11:10:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.382 11:10:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.382 11:10:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.382 11:10:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:47.382 11:10:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.382 11:10:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.382 11:10:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.641 11:10:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.641 11:10:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.641 11:10:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.641 11:10:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.641 11:10:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.641 11:10:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.641 11:10:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.898 11:10:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:49.287 [2024-11-15 11:10:26.477405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.287 [2024-11-15 11:10:26.609488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.287 [2024-11-15 11:10:26.609534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.546 [2024-11-15 11:10:26.836783] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:49.546 [2024-11-15 11:10:26.836871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.924 spdk_app_start Round 2 00:06:50.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.924 11:10:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.924 11:10:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:50.924 11:10:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59271 /var/tmp/spdk-nbd.sock 00:06:50.924 11:10:28 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59271 ']' 00:06:50.924 11:10:28 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.924 11:10:28 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.924 11:10:28 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
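[editor's note] Each nbd attach above passes through the same waitfornbd gate: poll /proc/partitions until the kernel registers the device, then prove a direct 4 KiB read returns real data before the test trusts it. A sketch of that helper as the trace implies it (the retry delay is an assumption; the xtrace only shows the loop bounds and the checks):

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break    # device visible yet?
        sleep 0.1                                           # assumed backoff
    done
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0                        # the read produced data
        sleep 0.1
    done
    return 1
}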
00:06:50.924 11:10:28 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.924 11:10:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.183 11:10:28 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:51.183 11:10:28 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:51.183 11:10:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.442 Malloc0 00:06:51.442 11:10:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.701 Malloc1 00:06:51.701 11:10:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.701 11:10:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.701 11:10:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.701 11:10:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.701 11:10:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.702 11:10:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.702 11:10:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.702 11:10:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.702 11:10:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.702 11:10:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.702 11:10:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.702 11:10:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.702 11:10:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.702 11:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.702 11:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.702 11:10:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.960 /dev/nbd0 00:06:51.960 11:10:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.960 11:10:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.960 1+0 records in 00:06:51.960 1+0 records out 
00:06:51.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042154 s, 9.7 MB/s 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:51.960 11:10:29 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:51.960 11:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.960 11:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.960 11:10:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.219 /dev/nbd1 00:06:52.219 11:10:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.219 11:10:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.219 1+0 records in 00:06:52.219 1+0 records out 00:06:52.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220091 s, 18.6 MB/s 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:52.219 11:10:29 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:52.219 11:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.219 11:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.219 11:10:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.219 11:10:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.219 11:10:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.477 11:10:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.477 { 00:06:52.477 "nbd_device": "/dev/nbd0", 00:06:52.477 "bdev_name": "Malloc0" 00:06:52.477 }, 00:06:52.477 { 00:06:52.477 "nbd_device": "/dev/nbd1", 00:06:52.477 "bdev_name": "Malloc1" 00:06:52.477 } 
00:06:52.477 ]' 00:06:52.477 11:10:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.477 11:10:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.477 { 00:06:52.477 "nbd_device": "/dev/nbd0", 00:06:52.477 "bdev_name": "Malloc0" 00:06:52.477 }, 00:06:52.477 { 00:06:52.477 "nbd_device": "/dev/nbd1", 00:06:52.477 "bdev_name": "Malloc1" 00:06:52.477 } 00:06:52.477 ]' 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.478 /dev/nbd1' 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.478 /dev/nbd1' 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.478 256+0 records in 00:06:52.478 256+0 records out 00:06:52.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130311 s, 80.5 MB/s 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.478 256+0 records in 00:06:52.478 256+0 records out 00:06:52.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298859 s, 35.1 MB/s 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.478 11:10:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.736 256+0 records in 00:06:52.736 256+0 records out 00:06:52.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0446104 s, 23.5 MB/s 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.736 11:10:29 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.736 11:10:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.995 11:10:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.995 11:10:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.995 11:10:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.995 11:10:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.995 11:10:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.995 11:10:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.995 11:10:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.995 11:10:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.995 11:10:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.995 11:10:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.253 11:10:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.253 11:10:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.253 11:10:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.253 11:10:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.253 11:10:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.253 11:10:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.253 11:10:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.253 11:10:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.253 11:10:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.253 11:10:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.253 11:10:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.511 11:10:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.511 11:10:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.511 11:10:30 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:53.511 11:10:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.511 11:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.511 11:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.511 11:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.511 11:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.511 11:10:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.511 11:10:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.511 11:10:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.512 11:10:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.512 11:10:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:53.770 11:10:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.171 [2024-11-15 11:10:32.397201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.171 [2024-11-15 11:10:32.532097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.171 [2024-11-15 11:10:32.532104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.431 [2024-11-15 11:10:32.757784] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.431 [2024-11-15 11:10:32.757869] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.809 11:10:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59271 /var/tmp/spdk-nbd.sock 00:06:56.809 11:10:34 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59271 ']' 00:06:56.809 11:10:34 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.809 11:10:34 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:56.809 11:10:34 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
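[editor's note] After every teardown the test re-queries the target and insists the export list is empty; note the bare 'true' in the trace after grep, which is there because grep -c exits non-zero when it counts zero matches. A sketch of that nbd_get_count check, with each command as the xtrace shows it:

nbd_disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # 0 matches -> exit 1, hence || true
[ "$count" -eq 0 ]   # nothing exported any more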
00:06:56.809 11:10:34 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:56.809 11:10:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.068 11:10:34 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:57.068 11:10:34 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:57.068 11:10:34 event.app_repeat -- event/event.sh@39 -- # killprocess 59271 00:06:57.068 11:10:34 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 59271 ']' 00:06:57.068 11:10:34 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 59271 00:06:57.068 11:10:34 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:57.068 11:10:34 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:57.068 11:10:34 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59271 00:06:57.068 killing process with pid 59271 00:06:57.068 11:10:34 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:57.068 11:10:34 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:57.068 11:10:34 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59271' 00:06:57.068 11:10:34 event.app_repeat -- common/autotest_common.sh@971 -- # kill 59271 00:06:57.068 11:10:34 event.app_repeat -- common/autotest_common.sh@976 -- # wait 59271 00:06:58.443 spdk_app_start is called in Round 0. 00:06:58.443 Shutdown signal received, stop current app iteration 00:06:58.443 Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 reinitialization... 00:06:58.443 spdk_app_start is called in Round 1. 00:06:58.443 Shutdown signal received, stop current app iteration 00:06:58.443 Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 reinitialization... 00:06:58.443 spdk_app_start is called in Round 2. 00:06:58.443 Shutdown signal received, stop current app iteration 00:06:58.443 Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 reinitialization... 00:06:58.443 spdk_app_start is called in Round 3. 00:06:58.443 Shutdown signal received, stop current app iteration 00:06:58.443 11:10:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:58.443 11:10:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:58.443 00:06:58.443 real 0m20.071s 00:06:58.443 user 0m42.255s 00:06:58.443 sys 0m3.600s 00:06:58.443 11:10:35 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:58.443 11:10:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.443 ************************************ 00:06:58.443 END TEST app_repeat 00:06:58.443 ************************************ 00:06:58.443 11:10:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:58.443 11:10:35 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:58.443 11:10:35 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:58.443 11:10:35 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:58.443 11:10:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.443 ************************************ 00:06:58.443 START TEST cpu_locks 00:06:58.443 ************************************ 00:06:58.443 11:10:35 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:58.443 * Looking for test storage... 
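[editor's note] The killprocess 59271 call above is the shared teardown helper: confirm the pid is still alive with kill -0, resolve its command name (refusing to signal a sudo wrapper), then SIGTERM and wait so the app's exit status is actually observed. A simplified sketch; the real helper carries more retry and signal plumbing, and the sudo branch here is an assumption:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                      # already gone?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1          # simplified: bail instead of killing sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # reap and surface the exit code
}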
00:06:58.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:58.443 11:10:35 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:58.443 11:10:35 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:58.443 11:10:35 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:58.703 11:10:35 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.703 11:10:35 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:58.703 11:10:35 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.703 11:10:35 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:58.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.703 --rc genhtml_branch_coverage=1 00:06:58.703 --rc genhtml_function_coverage=1 00:06:58.703 --rc genhtml_legend=1 00:06:58.703 --rc geninfo_all_blocks=1 00:06:58.703 --rc geninfo_unexecuted_blocks=1 00:06:58.703 00:06:58.703 ' 00:06:58.703 11:10:35 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:58.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.703 --rc genhtml_branch_coverage=1 00:06:58.703 --rc genhtml_function_coverage=1 
00:06:58.703 --rc genhtml_legend=1 00:06:58.703 --rc geninfo_all_blocks=1 00:06:58.703 --rc geninfo_unexecuted_blocks=1 00:06:58.703 00:06:58.703 ' 00:06:58.703 11:10:35 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:58.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.703 --rc genhtml_branch_coverage=1 00:06:58.703 --rc genhtml_function_coverage=1 00:06:58.703 --rc genhtml_legend=1 00:06:58.703 --rc geninfo_all_blocks=1 00:06:58.703 --rc geninfo_unexecuted_blocks=1 00:06:58.703 00:06:58.703 ' 00:06:58.703 11:10:35 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:58.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.703 --rc genhtml_branch_coverage=1 00:06:58.703 --rc genhtml_function_coverage=1 00:06:58.703 --rc genhtml_legend=1 00:06:58.703 --rc geninfo_all_blocks=1 00:06:58.703 --rc geninfo_unexecuted_blocks=1 00:06:58.703 00:06:58.703 ' 00:06:58.703 11:10:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:58.703 11:10:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:58.703 11:10:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:58.703 11:10:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:58.703 11:10:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:58.703 11:10:35 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:58.703 11:10:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.703 ************************************ 00:06:58.703 START TEST default_locks 00:06:58.703 ************************************ 00:06:58.703 11:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:58.703 11:10:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59731 00:06:58.703 11:10:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.704 11:10:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59731 00:06:58.704 11:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59731 ']' 00:06:58.704 11:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.704 11:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:58.704 11:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.704 11:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:58.704 11:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.704 [2024-11-15 11:10:36.004503] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
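
A minimal standalone sketch of the dotted-version comparison stepped through above (scripts/common.sh's cmp_versions handles more operators and separators; the helper below keeps only the '<' path, and the final echo is illustrative):

  # Compare two dotted versions field by field, padding missing fields
  # with 0; returns 0 (true) when $1 is strictly less than $2.
  lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
  }

  lt 1.15 2 && echo "lcov < 2: keep the --rc lcov_*_coverage=1 options"
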
00:06:58.704 [2024-11-15 11:10:36.004644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59731 ] 00:06:58.963 [2024-11-15 11:10:36.186941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.963 [2024-11-15 11:10:36.330506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.339 11:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.339 11:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:07:00.339 11:10:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59731 00:07:00.339 11:10:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59731 00:07:00.339 11:10:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.598 11:10:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59731 00:07:00.598 11:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 59731 ']' 00:07:00.598 11:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 59731 00:07:00.598 11:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:07:00.598 11:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:00.598 11:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59731 00:07:00.598 killing process with pid 59731 00:07:00.598 11:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:00.598 11:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:00.598 11:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59731' 00:07:00.598 11:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 59731 00:07:00.598 11:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 59731 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59731 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59731 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59731 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59731 ']' 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.131 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.131 ERROR: process (pid: 59731) is no longer running 00:07:03.131 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59731) - No such process 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:03.131 ************************************ 00:07:03.131 END TEST default_locks 00:07:03.131 ************************************ 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:03.131 00:07:03.131 real 0m4.564s 00:07:03.131 user 0m4.377s 00:07:03.131 sys 0m0.854s 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.131 11:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.131 11:10:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:03.131 11:10:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.131 11:10:40 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.131 11:10:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.390 ************************************ 00:07:03.390 START TEST default_locks_via_rpc 00:07:03.390 ************************************ 00:07:03.390 11:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:07:03.390 11:10:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59806 00:07:03.390 11:10:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59806 00:07:03.390 11:10:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.390 11:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59806 ']' 00:07:03.390 11:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.390 11:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.390 11:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.390 11:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.390 11:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.390 [2024-11-15 11:10:40.648893] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:07:03.390 [2024-11-15 11:10:40.649041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59806 ] 00:07:03.649 [2024-11-15 11:10:40.830983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.649 [2024-11-15 11:10:40.986156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.025 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:05.025 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:05.025 11:10:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:05.025 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.025 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.026 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.026 11:10:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:05.026 11:10:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:05.026 11:10:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:05.026 11:10:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:05.026 11:10:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:05.026 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.026 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.026 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.026 11:10:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59806 00:07:05.026 11:10:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59806 00:07:05.026 11:10:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.284 11:10:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59806 00:07:05.284 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59806 ']' 00:07:05.284 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59806 00:07:05.284 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:07:05.284 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:05.284 11:10:42 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59806 00:07:05.542 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:05.542 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:05.542 killing process with pid 59806 00:07:05.542 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59806' 00:07:05.542 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59806 00:07:05.542 11:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59806 00:07:08.828 00:07:08.828 real 0m4.956s 00:07:08.828 user 0m4.738s 00:07:08.828 sys 0m0.920s 00:07:08.828 11:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.828 11:10:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.828 ************************************ 00:07:08.828 END TEST default_locks_via_rpc 00:07:08.828 ************************************ 00:07:08.828 11:10:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:08.828 11:10:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:08.828 11:10:45 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.828 11:10:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.828 ************************************ 00:07:08.828 START TEST non_locking_app_on_locked_coremask 00:07:08.828 ************************************ 00:07:08.828 11:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:07:08.828 11:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59892 00:07:08.828 11:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.828 11:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59892 /var/tmp/spdk.sock 00:07:08.828 11:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59892 ']' 00:07:08.828 11:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.828 11:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:08.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.828 11:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.828 11:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:08.828 11:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.828 [2024-11-15 11:10:45.676072] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:07:08.828 [2024-11-15 11:10:45.676195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59892 ] 00:07:08.828 [2024-11-15 11:10:45.842533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.828 [2024-11-15 11:10:45.985467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.791 11:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:09.791 11:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:09.791 11:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59914 00:07:09.791 11:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59914 /var/tmp/spdk2.sock 00:07:09.791 11:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:09.791 11:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59914 ']' 00:07:09.791 11:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.791 11:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:09.791 11:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.791 11:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:09.791 11:10:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.051 [2024-11-15 11:10:47.234111] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:07:10.051 [2024-11-15 11:10:47.234246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59914 ] 00:07:10.051 [2024-11-15 11:10:47.420832] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
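
What the first target (pid 59892) is holding and the second (pid 59914, started with --disable-cpumask-locks) deliberately skips is an advisory per-core lock file. A rough shell equivalent of one such claim; the /var/tmp/spdk_cpu_lock_<core> path comes from the lslocks checks in this log, while flock(1) merely stands in for the target's actual locking call:

  # Try to take core 0's lock without blocking, as a competing process would.
  exec 9>/var/tmp/spdk_cpu_lock_000
  if flock -n 9; then
    echo "core 0 lock acquired"
  else
    echo "core 0 already claimed by another process"
  fi
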
00:07:10.051 [2024-11-15 11:10:47.420901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.619 [2024-11-15 11:10:47.724094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.528 11:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:12.528 11:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:12.528 11:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59892 00:07:12.528 11:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59892 00:07:12.528 11:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.909 11:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59892 00:07:13.909 11:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59892 ']' 00:07:13.909 11:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59892 00:07:13.909 11:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:13.909 11:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:13.909 11:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59892 00:07:13.909 11:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:13.909 11:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:13.909 killing process with pid 59892 00:07:13.909 11:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59892' 00:07:13.909 11:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59892 00:07:13.909 11:10:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59892 00:07:19.178 11:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59914 00:07:19.178 11:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59914 ']' 00:07:19.178 11:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59914 00:07:19.178 11:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:19.178 11:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:19.178 11:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59914 00:07:19.178 killing process with pid 59914 00:07:19.178 11:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:19.178 11:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:19.178 11:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59914' 00:07:19.178 11:10:56 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59914 00:07:19.178 11:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59914 00:07:21.733 00:07:21.733 real 0m13.199s 00:07:21.733 user 0m13.260s 00:07:21.733 sys 0m1.919s 00:07:21.733 11:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.733 ************************************ 00:07:21.733 END TEST non_locking_app_on_locked_coremask 00:07:21.733 ************************************ 00:07:21.733 11:10:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.733 11:10:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:21.733 11:10:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:21.733 11:10:58 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.733 11:10:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.733 ************************************ 00:07:21.733 START TEST locking_app_on_unlocked_coremask 00:07:21.733 ************************************ 00:07:21.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.733 11:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:07:21.733 11:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60077 00:07:21.733 11:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60077 /var/tmp/spdk.sock 00:07:21.733 11:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60077 ']' 00:07:21.733 11:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.733 11:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:21.733 11:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.733 11:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:21.733 11:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.733 11:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:21.733 [2024-11-15 11:10:58.945856] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:07:21.733 [2024-11-15 11:10:58.945998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60077 ] 00:07:21.733 [2024-11-15 11:10:59.126300] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
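
The locks_exist probe used before every killprocess above reduces to one pipeline; written out as a self-contained helper (the name and the example pid 60077 are taken from the surrounding trace):

  # True when the given pid holds a lock on any spdk_cpu_lock_* file.
  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  locks_exist 60077 || echo "pid 60077 holds no core locks (--disable-cpumask-locks)"
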
00:07:21.733 [2024-11-15 11:10:59.126360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.992 [2024-11-15 11:10:59.266451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.927 11:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:22.927 11:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:22.927 11:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60104 00:07:22.927 11:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:22.927 11:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60104 /var/tmp/spdk2.sock 00:07:22.927 11:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60104 ']' 00:07:22.927 11:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.927 11:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:22.927 11:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.928 11:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:22.928 11:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.186 [2024-11-15 11:11:00.378917] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:07:23.186 [2024-11-15 11:11:00.379235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60104 ] 00:07:23.186 [2024-11-15 11:11:00.565261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.445 [2024-11-15 11:11:00.829461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.978 11:11:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:25.978 11:11:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:25.978 11:11:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60104 00:07:25.978 11:11:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60104 00:07:25.978 11:11:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:26.546 11:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60077 00:07:26.546 11:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60077 ']' 00:07:26.546 11:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60077 00:07:26.546 11:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:26.546 11:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:26.546 11:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60077 00:07:26.805 killing process with pid 60077 00:07:26.806 11:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:26.806 11:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:26.806 11:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60077' 00:07:26.806 11:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60077 00:07:26.806 11:11:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60077 00:07:32.079 11:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60104 00:07:32.079 11:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60104 ']' 00:07:32.079 11:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60104 00:07:32.079 11:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:32.079 11:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:32.079 11:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60104 00:07:32.079 killing process with pid 60104 00:07:32.079 11:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:32.079 11:11:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:32.079 11:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60104' 00:07:32.079 11:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60104 00:07:32.079 11:11:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60104 00:07:35.370 00:07:35.370 real 0m13.286s 00:07:35.370 user 0m13.260s 00:07:35.370 sys 0m1.786s 00:07:35.370 11:11:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:35.370 11:11:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.370 ************************************ 00:07:35.370 END TEST locking_app_on_unlocked_coremask 00:07:35.370 ************************************ 00:07:35.370 11:11:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:35.370 11:11:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:35.370 11:11:12 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.370 11:11:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.370 ************************************ 00:07:35.370 START TEST locking_app_on_locked_coremask 00:07:35.370 ************************************ 00:07:35.370 11:11:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:07:35.370 11:11:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60263 00:07:35.370 11:11:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:35.370 11:11:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60263 /var/tmp/spdk.sock 00:07:35.370 11:11:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60263 ']' 00:07:35.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.370 11:11:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.370 11:11:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:35.370 11:11:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.370 11:11:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:35.370 11:11:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.370 [2024-11-15 11:11:12.290816] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:07:35.370 [2024-11-15 11:11:12.290945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60263 ] 00:07:35.370 [2024-11-15 11:11:12.473690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.370 [2024-11-15 11:11:12.601762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.307 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:36.307 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:36.307 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60285 00:07:36.307 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60285 /var/tmp/spdk2.sock 00:07:36.307 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:36.307 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:36.307 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60285 /var/tmp/spdk2.sock 00:07:36.308 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:36.308 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.308 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:36.308 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.308 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60285 /var/tmp/spdk2.sock 00:07:36.308 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60285 ']' 00:07:36.308 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:36.308 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:36.308 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:36.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:36.308 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:36.308 11:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.308 [2024-11-15 11:11:13.700635] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:07:36.308 [2024-11-15 11:11:13.700966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60285 ] 00:07:36.566 [2024-11-15 11:11:13.887771] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60263 has claimed it. 00:07:36.566 [2024-11-15 11:11:13.887850] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:37.133 ERROR: process (pid: 60285) is no longer running 00:07:37.133 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60285) - No such process 00:07:37.133 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.133 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:37.133 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:37.134 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.134 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.134 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.134 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60263 00:07:37.134 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60263 00:07:37.134 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:37.701 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60263 00:07:37.701 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60263 ']' 00:07:37.701 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60263 00:07:37.701 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:37.701 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:37.701 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60263 00:07:37.701 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:37.701 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:37.701 killing process with pid 60263 00:07:37.701 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60263' 00:07:37.701 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60263 00:07:37.701 11:11:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60263 00:07:40.233 ************************************ 00:07:40.233 END TEST locking_app_on_locked_coremask 00:07:40.233 ************************************ 00:07:40.233 00:07:40.233 real 0m5.356s 00:07:40.233 user 0m5.414s 00:07:40.233 sys 0m1.062s 00:07:40.233 11:11:17 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:40.233 11:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.233 11:11:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:40.233 11:11:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:40.233 11:11:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:40.233 11:11:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.233 ************************************ 00:07:40.233 START TEST locking_overlapped_coremask 00:07:40.233 ************************************ 00:07:40.233 11:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:40.233 11:11:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60354 00:07:40.233 11:11:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:40.233 11:11:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60354 /var/tmp/spdk.sock 00:07:40.233 11:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60354 ']' 00:07:40.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.233 11:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.233 11:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:40.233 11:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.233 11:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:40.233 11:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.491 [2024-11-15 11:11:17.719429] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
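
The collision this test sets up is visible straight from the masks: -m takes a hex coremask, and 0x7 (first target, above) and 0x1c (second target, below) share bit 2. A quick expansion helper, illustrative only:

  # Print the core numbers set in a hex coremask.
  mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=""
    while (( mask > 0 )); do
      if (( mask & 1 )); then cores+="$core "; fi
      mask=$(( mask >> 1 ))
      core=$(( core + 1 ))
    done
    echo "$cores"
  }

  mask_to_cores 0x7    # -> 0 1 2
  mask_to_cores 0x1c   # -> 2 3 4  (core 2 overlaps)
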
00:07:40.491 [2024-11-15 11:11:17.719549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60354 ] 00:07:40.749 [2024-11-15 11:11:17.902207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:40.749 [2024-11-15 11:11:18.037863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.749 [2024-11-15 11:11:18.038013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.749 [2024-11-15 11:11:18.038047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60378 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60378 /var/tmp/spdk2.sock 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60378 /var/tmp/spdk2.sock 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60378 /var/tmp/spdk2.sock 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60378 ']' 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:41.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:41.687 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.946 [2024-11-15 11:11:19.159996] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:07:41.946 [2024-11-15 11:11:19.160685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60378 ] 00:07:41.946 [2024-11-15 11:11:19.344549] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60354 has claimed it. 00:07:41.946 [2024-11-15 11:11:19.344627] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:42.515 ERROR: process (pid: 60378) is no longer running 00:07:42.515 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60378) - No such process 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60354 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 60354 ']' 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 60354 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60354 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60354' 00:07:42.515 killing process with pid 60354 00:07:42.515 11:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 60354 00:07:42.515 11:11:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 60354 00:07:45.805 00:07:45.805 real 0m4.877s 00:07:45.805 user 0m13.079s 00:07:45.805 sys 0m0.778s 00:07:45.805 11:11:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.805 11:11:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.805 ************************************ 00:07:45.805 END TEST locking_overlapped_coremask 00:07:45.805 ************************************ 00:07:45.805 11:11:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:45.805 11:11:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:45.805 11:11:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.805 11:11:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.805 ************************************ 00:07:45.805 START TEST locking_overlapped_coremask_via_rpc 00:07:45.805 ************************************ 00:07:45.805 11:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:07:45.805 11:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60446 00:07:45.805 11:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60446 /var/tmp/spdk.sock 00:07:45.806 11:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:45.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.806 11:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60446 ']' 00:07:45.806 11:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.806 11:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:45.806 11:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.806 11:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:45.806 11:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.806 [2024-11-15 11:11:22.684241] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:07:45.806 [2024-11-15 11:11:22.684388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60446 ] 00:07:45.806 [2024-11-15 11:11:22.866369] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
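
check_remaining_locks, traced at event/cpu_locks.sh@36-38 above, is a plain glob-versus-brace-expansion comparison; without the xtrace escaping it reads:

  # After the overlapped target failed to start, exactly the surviving
  # target's three lock files (cores 0-2) should be left in /var/tmp.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only cores 0-2 locked"
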
00:07:45.806 [2024-11-15 11:11:22.866422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.806 [2024-11-15 11:11:23.005369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.806 [2024-11-15 11:11:23.005572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.806 [2024-11-15 11:11:23.005618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:46.743 11:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:46.743 11:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:46.743 11:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:46.743 11:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60471 00:07:46.743 11:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60471 /var/tmp/spdk2.sock 00:07:46.743 11:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60471 ']' 00:07:46.743 11:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:46.743 11:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:46.743 11:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:46.743 11:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:46.743 11:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.743 [2024-11-15 11:11:24.112307] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:07:46.743 [2024-11-15 11:11:24.112725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60471 ] 00:07:47.002 [2024-11-15 11:11:24.321667] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:47.002 [2024-11-15 11:11:24.321721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.260 [2024-11-15 11:11:24.556827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.260 [2024-11-15 11:11:24.560732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.260 [2024-11-15 11:11:24.560770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.788 [2024-11-15 11:11:26.686811] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60446 has claimed it. 
00:07:49.788 request: 00:07:49.788 { 00:07:49.788 "method": "framework_enable_cpumask_locks", 00:07:49.788 "req_id": 1 00:07:49.788 } 00:07:49.788 Got JSON-RPC error response 00:07:49.788 response: 00:07:49.788 { 00:07:49.788 "code": -32603, 00:07:49.788 "message": "Failed to claim CPU core: 2" 00:07:49.788 } 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60446 /var/tmp/spdk.sock 00:07:49.788 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60446 ']' 00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60471 /var/tmp/spdk2.sock 00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60471 ']' 00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:49.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
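The -32603 response above is the expected outcome: the first target (pid 60446, cores 0-2) enabled its locks first, so the second target cannot claim core 2. A minimal sketch of that sequence, using the binaries and sockets from this run (hugepage setup simplified; the real test waits on each RPC socket with waitforlisten):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x7 --disable-cpumask-locks &
    "$SPDK/build/bin/spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # ...wait for both RPC sockets to come up...
    "$SPDK/scripts/rpc.py" framework_enable_cpumask_locks   # first target claims cores 0-2
    ls /var/tmp/spdk_cpu_lock_*                              # _000 _001 _002 appear
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks ||
        echo 'expected: "Failed to claim CPU core: 2" (-32603)'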
00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:49.789 11:11:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.789 11:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:49.789 11:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:49.789 11:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:49.789 ************************************ 00:07:49.789 END TEST locking_overlapped_coremask_via_rpc 00:07:49.789 ************************************ 00:07:49.789 11:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:49.789 11:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:49.789 11:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:49.789 00:07:49.789 real 0m4.589s 00:07:49.789 user 0m1.260s 00:07:49.789 sys 0m0.258s 00:07:49.789 11:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.789 11:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.048 11:11:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:50.048 11:11:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60446 ]] 00:07:50.048 11:11:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60446 00:07:50.048 11:11:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60446 ']' 00:07:50.048 11:11:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60446 00:07:50.048 11:11:27 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:50.048 11:11:27 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:50.048 11:11:27 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60446 00:07:50.048 11:11:27 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:50.048 11:11:27 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:50.048 killing process with pid 60446 00:07:50.048 11:11:27 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60446' 00:07:50.048 11:11:27 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60446 00:07:50.048 11:11:27 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60446 00:07:52.618 11:11:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60471 ]] 00:07:52.618 11:11:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60471 00:07:52.618 11:11:29 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60471 ']' 00:07:52.619 11:11:29 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60471 00:07:52.619 11:11:29 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:52.619 11:11:29 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:52.619 
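The trace through autotest_common.sh here is the killprocess helper: confirm the pid is alive with kill -0, read its command name with ps (here it resolves to reactor_2) so a sudo wrapper is never signalled directly, then kill and reap it. A condensed sketch of that pattern, not a verbatim copy of autotest_common.sh:

    killprocess() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0       # nothing left to kill
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name == sudo ]]; then
            return 1                                  # never signal the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap it (pid is a child of this shell)
    }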
11:11:29 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60471 00:07:52.878 killing process with pid 60471 00:07:52.878 11:11:30 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:52.878 11:11:30 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:52.878 11:11:30 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60471' 00:07:52.878 11:11:30 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60471 00:07:52.878 11:11:30 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60471 00:07:55.411 11:11:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:55.411 Process with pid 60446 is not found 00:07:55.411 Process with pid 60471 is not found 00:07:55.411 11:11:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:55.411 11:11:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60446 ]] 00:07:55.411 11:11:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60446 00:07:55.411 11:11:32 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60446 ']' 00:07:55.411 11:11:32 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60446 00:07:55.411 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60446) - No such process 00:07:55.411 11:11:32 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60446 is not found' 00:07:55.411 11:11:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60471 ]] 00:07:55.411 11:11:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60471 00:07:55.411 11:11:32 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60471 ']' 00:07:55.411 11:11:32 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60471 00:07:55.411 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60471) - No such process 00:07:55.411 11:11:32 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60471 is not found' 00:07:55.411 11:11:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:55.411 00:07:55.411 real 0m56.895s 00:07:55.411 user 1m33.541s 00:07:55.411 sys 0m9.033s 00:07:55.411 11:11:32 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:55.411 11:11:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.411 ************************************ 00:07:55.411 END TEST cpu_locks 00:07:55.411 ************************************ 00:07:55.411 00:07:55.411 real 1m28.018s 00:07:55.411 user 2m32.260s 00:07:55.411 sys 0m14.039s 00:07:55.411 11:11:32 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:55.411 ************************************ 00:07:55.411 END TEST event 00:07:55.411 ************************************ 00:07:55.411 11:11:32 event -- common/autotest_common.sh@10 -- # set +x 00:07:55.411 11:11:32 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:55.411 11:11:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:55.411 11:11:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.411 11:11:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.411 ************************************ 00:07:55.411 START TEST thread 00:07:55.411 ************************************ 00:07:55.411 11:11:32 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:55.670 * Looking for test storage... 
00:07:55.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:55.670 11:11:32 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:55.670 11:11:32 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:55.670 11:11:32 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:55.670 11:11:32 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:55.670 11:11:32 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.670 11:11:32 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.670 11:11:32 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.670 11:11:32 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.670 11:11:32 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.670 11:11:32 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.670 11:11:32 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.670 11:11:32 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.670 11:11:32 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.670 11:11:32 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.670 11:11:32 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.670 11:11:32 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:55.670 11:11:32 thread -- scripts/common.sh@345 -- # : 1 00:07:55.670 11:11:32 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.670 11:11:32 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:55.670 11:11:32 thread -- scripts/common.sh@365 -- # decimal 1 00:07:55.670 11:11:32 thread -- scripts/common.sh@353 -- # local d=1 00:07:55.670 11:11:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.670 11:11:32 thread -- scripts/common.sh@355 -- # echo 1 00:07:55.670 11:11:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.670 11:11:32 thread -- scripts/common.sh@366 -- # decimal 2 00:07:55.670 11:11:32 thread -- scripts/common.sh@353 -- # local d=2 00:07:55.670 11:11:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.670 11:11:32 thread -- scripts/common.sh@355 -- # echo 2 00:07:55.670 11:11:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.670 11:11:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.670 11:11:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.670 11:11:32 thread -- scripts/common.sh@368 -- # return 0 00:07:55.670 11:11:32 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.670 11:11:32 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:55.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.670 --rc genhtml_branch_coverage=1 00:07:55.670 --rc genhtml_function_coverage=1 00:07:55.670 --rc genhtml_legend=1 00:07:55.670 --rc geninfo_all_blocks=1 00:07:55.670 --rc geninfo_unexecuted_blocks=1 00:07:55.670 00:07:55.670 ' 00:07:55.670 11:11:32 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:55.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.670 --rc genhtml_branch_coverage=1 00:07:55.670 --rc genhtml_function_coverage=1 00:07:55.670 --rc genhtml_legend=1 00:07:55.670 --rc geninfo_all_blocks=1 00:07:55.670 --rc geninfo_unexecuted_blocks=1 00:07:55.670 00:07:55.670 ' 00:07:55.670 11:11:32 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:55.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:55.670 --rc genhtml_branch_coverage=1 00:07:55.670 --rc genhtml_function_coverage=1 00:07:55.670 --rc genhtml_legend=1 00:07:55.670 --rc geninfo_all_blocks=1 00:07:55.670 --rc geninfo_unexecuted_blocks=1 00:07:55.670 00:07:55.670 ' 00:07:55.670 11:11:32 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:55.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.670 --rc genhtml_branch_coverage=1 00:07:55.670 --rc genhtml_function_coverage=1 00:07:55.670 --rc genhtml_legend=1 00:07:55.670 --rc geninfo_all_blocks=1 00:07:55.670 --rc geninfo_unexecuted_blocks=1 00:07:55.670 00:07:55.670 ' 00:07:55.670 11:11:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:55.670 11:11:32 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:55.670 11:11:32 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.670 11:11:32 thread -- common/autotest_common.sh@10 -- # set +x 00:07:55.670 ************************************ 00:07:55.670 START TEST thread_poller_perf 00:07:55.670 ************************************ 00:07:55.670 11:11:32 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:55.670 [2024-11-15 11:11:33.001517] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:07:55.670 [2024-11-15 11:11:33.001821] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60666 ] 00:07:55.929 [2024-11-15 11:11:33.191204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.188 [2024-11-15 11:11:33.353332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.188 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:57.578 [2024-11-15T11:11:34.979Z] ====================================== 00:07:57.578 [2024-11-15T11:11:34.979Z] busy:2503277306 (cyc) 00:07:57.578 [2024-11-15T11:11:34.979Z] total_run_count: 353000 00:07:57.578 [2024-11-15T11:11:34.979Z] tsc_hz: 2490000000 (cyc) 00:07:57.578 [2024-11-15T11:11:34.979Z] ====================================== 00:07:57.578 [2024-11-15T11:11:34.979Z] poller_cost: 7091 (cyc), 2847 (nsec) 00:07:57.578 00:07:57.578 real 0m1.681s 00:07:57.578 ************************************ 00:07:57.578 END TEST thread_poller_perf 00:07:57.578 ************************************ 00:07:57.578 user 0m1.429s 00:07:57.578 sys 0m0.141s 00:07:57.578 11:11:34 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.578 11:11:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:57.578 11:11:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:57.578 11:11:34 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:57.578 11:11:34 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.578 11:11:34 thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.578 ************************************ 00:07:57.578 START TEST thread_poller_perf 00:07:57.578 ************************************ 00:07:57.578 11:11:34 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:57.578 [2024-11-15 11:11:34.763337] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:07:57.578 [2024-11-15 11:11:34.763465] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60708 ] 00:07:57.578 [2024-11-15 11:11:34.951702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.841 Running 1000 pollers for 1 seconds with 0 microseconds period. 
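The summary block above reduces to two integer divisions: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure rescales cycles by tsc_hz. The same arithmetic in shell reproduces the printed numbers exactly:

    busy=2503277306 runs=353000 tsc_hz=2490000000
    echo $(( busy / runs ))                        # 7091 cycles per poller invocation
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # 2847 nsec at 2.49 GHz

The second pass below repeats the run with a 0-microsecond period, i.e. busy pollers that fire back-to-back instead of timed pollers, and its per-call cost comes out far lower (533 cyc, 214 nsec), consistent with skipping the timer dispatch path.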
00:07:57.841 [2024-11-15 11:11:35.097370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.219 [2024-11-15T11:11:36.620Z] ====================================== 00:07:59.219 [2024-11-15T11:11:36.620Z] busy:2494658006 (cyc) 00:07:59.219 [2024-11-15T11:11:36.620Z] total_run_count: 4674000 00:07:59.219 [2024-11-15T11:11:36.620Z] tsc_hz: 2490000000 (cyc) 00:07:59.219 [2024-11-15T11:11:36.620Z] ====================================== 00:07:59.219 [2024-11-15T11:11:36.620Z] poller_cost: 533 (cyc), 214 (nsec) 00:07:59.219 ************************************ 00:07:59.219 END TEST thread_poller_perf 00:07:59.219 ************************************ 00:07:59.219 00:07:59.219 real 0m1.653s 00:07:59.219 user 0m1.400s 00:07:59.219 sys 0m0.145s 00:07:59.219 11:11:36 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.219 11:11:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:59.219 11:11:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:59.219 00:07:59.219 real 0m3.741s 00:07:59.219 user 0m2.997s 00:07:59.219 sys 0m0.533s 00:07:59.219 11:11:36 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.219 11:11:36 thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.219 ************************************ 00:07:59.219 END TEST thread 00:07:59.219 ************************************ 00:07:59.219 11:11:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:59.219 11:11:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.219 11:11:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:59.219 11:11:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:59.219 11:11:36 -- common/autotest_common.sh@10 -- # set +x 00:07:59.219 ************************************ 00:07:59.219 START TEST app_cmdline 00:07:59.219 ************************************ 00:07:59.219 11:11:36 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.478 * Looking for test storage... 
00:07:59.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:59.478 11:11:36 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:59.478 11:11:36 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:59.478 11:11:36 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:59.478 11:11:36 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.479 11:11:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:59.479 11:11:36 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.479 11:11:36 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:59.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.479 --rc genhtml_branch_coverage=1 00:07:59.479 --rc genhtml_function_coverage=1 00:07:59.479 --rc genhtml_legend=1 00:07:59.479 --rc geninfo_all_blocks=1 00:07:59.479 --rc geninfo_unexecuted_blocks=1 00:07:59.479 00:07:59.479 ' 00:07:59.479 11:11:36 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:59.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.479 --rc genhtml_branch_coverage=1 00:07:59.479 --rc genhtml_function_coverage=1 00:07:59.479 --rc genhtml_legend=1 00:07:59.479 --rc geninfo_all_blocks=1 00:07:59.479 --rc geninfo_unexecuted_blocks=1 00:07:59.479 
00:07:59.479 ' 00:07:59.479 11:11:36 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:59.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.479 --rc genhtml_branch_coverage=1 00:07:59.479 --rc genhtml_function_coverage=1 00:07:59.479 --rc genhtml_legend=1 00:07:59.479 --rc geninfo_all_blocks=1 00:07:59.479 --rc geninfo_unexecuted_blocks=1 00:07:59.479 00:07:59.479 ' 00:07:59.479 11:11:36 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:59.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.479 --rc genhtml_branch_coverage=1 00:07:59.479 --rc genhtml_function_coverage=1 00:07:59.479 --rc genhtml_legend=1 00:07:59.479 --rc geninfo_all_blocks=1 00:07:59.479 --rc geninfo_unexecuted_blocks=1 00:07:59.479 00:07:59.479 ' 00:07:59.479 11:11:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:59.479 11:11:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60797 00:07:59.479 11:11:36 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:59.479 11:11:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60797 00:07:59.479 11:11:36 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 60797 ']' 00:07:59.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.479 11:11:36 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.479 11:11:36 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:59.479 11:11:36 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.479 11:11:36 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.479 11:11:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:59.479 [2024-11-15 11:11:36.858342] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
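This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; anything else returns JSON-RPC -32601, which the env_dpdk_get_mem_stats probe further down demonstrates. A sketch of the allowlist behaviour, using the same binaries as this run (waitforlisten elided):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    # ...wait for /var/tmp/spdk.sock...
    "$SPDK/scripts/rpc.py" spdk_get_version | jq -r .version     # SPDK v25.01-pre git sha1 57db986b9
    "$SPDK/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort  # exactly the two allowed methods
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats ||
        echo 'rejected: "Method not found" (-32601)'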
00:07:59.479 [2024-11-15 11:11:36.858650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60797 ] 00:07:59.738 [2024-11-15 11:11:37.026976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.997 [2024-11-15 11:11:37.166708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.932 11:11:38 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:00.932 11:11:38 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:08:00.932 11:11:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:01.208 { 00:08:01.208 "version": "SPDK v25.01-pre git sha1 57db986b9", 00:08:01.208 "fields": { 00:08:01.208 "major": 25, 00:08:01.208 "minor": 1, 00:08:01.208 "patch": 0, 00:08:01.208 "suffix": "-pre", 00:08:01.208 "commit": "57db986b9" 00:08:01.208 } 00:08:01.208 } 00:08:01.208 11:11:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:01.208 11:11:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:01.209 11:11:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:01.209 11:11:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:01.209 11:11:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:01.209 11:11:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:01.209 11:11:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.209 11:11:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:01.209 11:11:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:01.209 11:11:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:01.209 11:11:38 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.475 request: 00:08:01.475 { 00:08:01.475 "method": "env_dpdk_get_mem_stats", 00:08:01.475 "req_id": 1 00:08:01.475 } 00:08:01.475 Got JSON-RPC error response 00:08:01.475 response: 00:08:01.475 { 00:08:01.475 "code": -32601, 00:08:01.475 "message": "Method not found" 00:08:01.475 } 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:01.475 11:11:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60797 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 60797 ']' 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 60797 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60797 00:08:01.475 killing process with pid 60797 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60797' 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@971 -- # kill 60797 00:08:01.475 11:11:38 app_cmdline -- common/autotest_common.sh@976 -- # wait 60797 00:08:04.764 00:08:04.764 real 0m5.127s 00:08:04.764 user 0m5.196s 00:08:04.764 sys 0m0.839s 00:08:04.764 11:11:41 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.764 ************************************ 00:08:04.764 END TEST app_cmdline 00:08:04.764 ************************************ 00:08:04.764 11:11:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:04.764 11:11:41 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:04.764 11:11:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:04.764 11:11:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.764 11:11:41 -- common/autotest_common.sh@10 -- # set +x 00:08:04.764 ************************************ 00:08:04.764 START TEST version 00:08:04.764 ************************************ 00:08:04.764 11:11:41 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:04.764 * Looking for test storage... 
00:08:04.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:04.764 11:11:41 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:04.764 11:11:41 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:04.764 11:11:41 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:04.764 11:11:41 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:04.764 11:11:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.764 11:11:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.764 11:11:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.764 11:11:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.764 11:11:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.764 11:11:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.764 11:11:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.764 11:11:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.764 11:11:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.764 11:11:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.764 11:11:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.764 11:11:41 version -- scripts/common.sh@344 -- # case "$op" in 00:08:04.764 11:11:41 version -- scripts/common.sh@345 -- # : 1 00:08:04.764 11:11:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.764 11:11:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:04.764 11:11:41 version -- scripts/common.sh@365 -- # decimal 1 00:08:04.764 11:11:41 version -- scripts/common.sh@353 -- # local d=1 00:08:04.764 11:11:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.764 11:11:41 version -- scripts/common.sh@355 -- # echo 1 00:08:04.764 11:11:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.764 11:11:41 version -- scripts/common.sh@366 -- # decimal 2 00:08:04.764 11:11:41 version -- scripts/common.sh@353 -- # local d=2 00:08:04.764 11:11:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.764 11:11:41 version -- scripts/common.sh@355 -- # echo 2 00:08:04.764 11:11:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.764 11:11:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.764 11:11:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.764 11:11:41 version -- scripts/common.sh@368 -- # return 0 00:08:04.764 11:11:41 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.764 11:11:41 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:04.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.764 --rc genhtml_branch_coverage=1 00:08:04.764 --rc genhtml_function_coverage=1 00:08:04.764 --rc genhtml_legend=1 00:08:04.764 --rc geninfo_all_blocks=1 00:08:04.764 --rc geninfo_unexecuted_blocks=1 00:08:04.764 00:08:04.764 ' 00:08:04.764 11:11:41 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:04.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.764 --rc genhtml_branch_coverage=1 00:08:04.764 --rc genhtml_function_coverage=1 00:08:04.764 --rc genhtml_legend=1 00:08:04.764 --rc geninfo_all_blocks=1 00:08:04.764 --rc geninfo_unexecuted_blocks=1 00:08:04.764 00:08:04.764 ' 00:08:04.764 11:11:41 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:04.764 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:04.764 --rc genhtml_branch_coverage=1 00:08:04.764 --rc genhtml_function_coverage=1 00:08:04.764 --rc genhtml_legend=1 00:08:04.764 --rc geninfo_all_blocks=1 00:08:04.764 --rc geninfo_unexecuted_blocks=1 00:08:04.764 00:08:04.764 ' 00:08:04.764 11:11:41 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:04.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.764 --rc genhtml_branch_coverage=1 00:08:04.764 --rc genhtml_function_coverage=1 00:08:04.764 --rc genhtml_legend=1 00:08:04.764 --rc geninfo_all_blocks=1 00:08:04.764 --rc geninfo_unexecuted_blocks=1 00:08:04.764 00:08:04.764 ' 00:08:04.764 11:11:41 version -- app/version.sh@17 -- # get_header_version major 00:08:04.764 11:11:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:04.764 11:11:41 version -- app/version.sh@14 -- # cut -f2 00:08:04.764 11:11:41 version -- app/version.sh@14 -- # tr -d '"' 00:08:04.764 11:11:41 version -- app/version.sh@17 -- # major=25 00:08:04.764 11:11:41 version -- app/version.sh@18 -- # get_header_version minor 00:08:04.764 11:11:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:04.764 11:11:41 version -- app/version.sh@14 -- # cut -f2 00:08:04.764 11:11:41 version -- app/version.sh@14 -- # tr -d '"' 00:08:04.764 11:11:41 version -- app/version.sh@18 -- # minor=1 00:08:04.764 11:11:41 version -- app/version.sh@19 -- # get_header_version patch 00:08:04.764 11:11:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:04.764 11:11:41 version -- app/version.sh@14 -- # cut -f2 00:08:04.764 11:11:41 version -- app/version.sh@14 -- # tr -d '"' 00:08:04.764 11:11:41 version -- app/version.sh@19 -- # patch=0 00:08:04.764 11:11:41 version -- app/version.sh@20 -- # get_header_version suffix 00:08:04.764 11:11:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:04.764 11:11:41 version -- app/version.sh@14 -- # cut -f2 00:08:04.764 11:11:41 version -- app/version.sh@14 -- # tr -d '"' 00:08:04.764 11:11:41 version -- app/version.sh@20 -- # suffix=-pre 00:08:04.764 11:11:41 version -- app/version.sh@22 -- # version=25.1 00:08:04.764 11:11:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:04.764 11:11:41 version -- app/version.sh@28 -- # version=25.1rc0 00:08:04.764 11:11:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:04.764 11:11:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:04.764 11:11:42 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:04.764 11:11:42 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:04.764 00:08:04.764 real 0m0.341s 00:08:04.764 user 0m0.184s 00:08:04.764 sys 0m0.208s 00:08:04.764 ************************************ 00:08:04.764 END TEST version 00:08:04.764 ************************************ 00:08:04.764 11:11:42 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.764 11:11:42 version -- common/autotest_common.sh@10 -- # set +x 00:08:04.764 11:11:42 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:04.764 11:11:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:04.764 11:11:42 -- spdk/autotest.sh@194 -- # uname -s 00:08:04.764 11:11:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:04.764 11:11:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:04.764 11:11:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:04.764 11:11:42 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:08:04.764 11:11:42 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:04.764 11:11:42 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:04.764 11:11:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.764 11:11:42 -- common/autotest_common.sh@10 -- # set +x 00:08:04.764 ************************************ 00:08:04.764 START TEST blockdev_nvme 00:08:04.764 ************************************ 00:08:04.764 11:11:42 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:05.025 * Looking for test storage... 00:08:05.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.025 11:11:42 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:05.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.025 --rc genhtml_branch_coverage=1 00:08:05.025 --rc genhtml_function_coverage=1 00:08:05.025 --rc genhtml_legend=1 00:08:05.025 --rc geninfo_all_blocks=1 00:08:05.025 --rc geninfo_unexecuted_blocks=1 00:08:05.025 00:08:05.025 ' 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:05.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.025 --rc genhtml_branch_coverage=1 00:08:05.025 --rc genhtml_function_coverage=1 00:08:05.025 --rc genhtml_legend=1 00:08:05.025 --rc geninfo_all_blocks=1 00:08:05.025 --rc geninfo_unexecuted_blocks=1 00:08:05.025 00:08:05.025 ' 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:05.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.025 --rc genhtml_branch_coverage=1 00:08:05.025 --rc genhtml_function_coverage=1 00:08:05.025 --rc genhtml_legend=1 00:08:05.025 --rc geninfo_all_blocks=1 00:08:05.025 --rc geninfo_unexecuted_blocks=1 00:08:05.025 00:08:05.025 ' 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:05.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.025 --rc genhtml_branch_coverage=1 00:08:05.025 --rc genhtml_function_coverage=1 00:08:05.025 --rc genhtml_legend=1 00:08:05.025 --rc geninfo_all_blocks=1 00:08:05.025 --rc geninfo_unexecuted_blocks=1 00:08:05.025 00:08:05.025 ' 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:05.025 11:11:42 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60991 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:05.025 11:11:42 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60991 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 60991 ']' 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:05.025 11:11:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.285 [2024-11-15 11:11:42.496402] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
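setup_nvme_conf below asks gen_nvme.sh for a bdev subsystem config and feeds it to load_subsystem_config; it attaches the four QEMU NVMe controllers of this VM by PCI address. Pretty-printed, the one-line JSON from the trace is:

    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
      ]
    }

The controller at 0000:00:12.0 exposes three namespaces (Nvme2n1 through Nvme2n3 in the bdev dump further down), and the one at 0000:00:13.0 reports a shareable namespace under the nqn.2019-08.org.qemu:fdp-subsys3 subsystem NQN.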
00:08:05.285 [2024-11-15 11:11:42.496545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60991 ] 00:08:05.543 [2024-11-15 11:11:42.685111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.543 [2024-11-15 11:11:42.841057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.476 11:11:43 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:06.476 11:11:43 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:08:06.476 11:11:43 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:08:06.476 11:11:43 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:08:06.476 11:11:43 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:06.476 11:11:43 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:06.476 11:11:43 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:06.736 11:11:43 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:06.736 11:11:43 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.736 11:11:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.994 11:11:44 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.994 11:11:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:08:06.994 11:11:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.994 11:11:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.994 11:11:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.994 11:11:44 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:08:06.994 11:11:44 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:08:06.994 11:11:44 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.994 11:11:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.253 11:11:44 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.253 11:11:44 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:08:07.253 11:11:44 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:08:07.254 11:11:44 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "81e3fc82-b39a-415b-92a9-ef07a9fd30bc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "81e3fc82-b39a-415b-92a9-ef07a9fd30bc",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d8bb3d57-0e52-414d-9eff-4ec579fd8975"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d8bb3d57-0e52-414d-9eff-4ec579fd8975",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "6887db40-58fd-4037-8bb9-edb5a23c05ff"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6887db40-58fd-4037-8bb9-edb5a23c05ff",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "36ba82dc-9d87-4e51-9f4f-95a9c88ed373"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "36ba82dc-9d87-4e51-9f4f-95a9c88ed373",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "99dd0f0a-6f12-4f0c-950c-db2f660441e4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "99dd0f0a-6f12-4f0c-950c-db2f660441e4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "54a2fb0f-f98a-4f8e-aefb-881a99c691a2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "54a2fb0f-f98a-4f8e-aefb-881a99c691a2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:07.254 11:11:44 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:08:07.254 11:11:44 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:08:07.254 11:11:44 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:08:07.254 11:11:44 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 60991 00:08:07.254 11:11:44 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 60991 ']' 00:08:07.254 11:11:44 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 60991 00:08:07.254 11:11:44 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:08:07.254 11:11:44 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:07.254 11:11:44 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60991 00:08:07.254 killing process with pid 60991 00:08:07.254 11:11:44 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:07.254 11:11:44 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:07.254 11:11:44 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60991' 00:08:07.254 11:11:44 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 60991 00:08:07.254 11:11:44 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 60991 00:08:09.870 11:11:47 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:09.870 11:11:47 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:09.870 11:11:47 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:08:09.870 11:11:47 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.870 11:11:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.870 ************************************ 00:08:09.870 START TEST bdev_hello_world 00:08:09.870 ************************************ 00:08:09.870 11:11:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:10.128 [2024-11-15 11:11:47.309429] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:08:10.128 [2024-11-15 11:11:47.309608] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61098 ] 00:08:10.128 [2024-11-15 11:11:47.491451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.387 [2024-11-15 11:11:47.659333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.323 [2024-11-15 11:11:48.432802] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:11.323 [2024-11-15 11:11:48.432872] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:11.323 [2024-11-15 11:11:48.432897] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:11.323 [2024-11-15 11:11:48.436191] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:11.323 [2024-11-15 11:11:48.483380] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:11.323 [2024-11-15 11:11:48.483578] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:11.323 [2024-11-15 11:11:48.483841] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
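The hello_world run above exercises the configuration built at the top of this test: gen_nvme.sh emits one bdev_nvme_attach_controller entry per PCIe controller (Nvme0 through Nvme3 at 0000:00:10.0 through 0000:00:13.0), load_subsystem_config applies it, and the same JSON is saved as test/bdev/bdev.json for hello_bdev's --json flag. A minimal sketch of the equivalent attach done by hand against a running target, using rpc.py's documented -b/-t/-a options for this RPC (illustrative only; the test itself drives this through the JSON config shown above, which remains authoritative):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # attach each QEMU NVMe controller by PCI address, naming the controller NvmeN
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  $rpc bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0
  $rpc bdev_nvme_attach_controller -b Nvme2 -t PCIe -a 0000:00:12.0
  $rpc bdev_nvme_attach_controller -b Nvme3 -t PCIe -a 0000:00:13.0
  # collect the unclaimed bdev names the same way blockdev.sh does above
  $rpc bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
Note in the bdev_get_bdevs dump above that the multi-namespace controller at 0000:00:12.0 yields Nvme2n1/Nvme2n2/Nvme2n3, and the FDP subsystem at 0000:00:13.0 (subnqn nqn.2019-08.org.qemu:fdp-subsys3) yields the shareable Nvme3n1.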
00:08:11.323 00:08:11.323 [2024-11-15 11:11:48.483877] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:12.699 00:08:12.699 real 0m2.604s 00:08:12.699 user 0m2.143s 00:08:12.699 sys 0m0.346s 00:08:12.699 ************************************ 00:08:12.699 END TEST bdev_hello_world 00:08:12.699 ************************************ 00:08:12.699 11:11:49 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:12.699 11:11:49 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:12.699 11:11:49 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:12.699 11:11:49 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:12.699 11:11:49 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.699 11:11:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.699 ************************************ 00:08:12.699 START TEST bdev_bounds 00:08:12.699 ************************************ 00:08:12.699 11:11:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:08:12.699 Process bdevio pid: 61146 00:08:12.699 11:11:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61146 00:08:12.699 11:11:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:12.699 11:11:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:12.699 11:11:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61146' 00:08:12.699 11:11:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61146 00:08:12.699 11:11:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61146 ']' 00:08:12.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.699 11:11:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.699 11:11:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:12.699 11:11:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.699 11:11:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:12.699 11:11:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:12.699 [2024-11-15 11:11:49.983028] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
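bdev_bounds, which starts here, launches the bdevio application in wait mode against the same bdev.json and then kicks off the suites with tests.py; both command lines appear verbatim in this log. A rough manual equivalent, assuming -w makes bdevio block until the perform_tests RPC arrives on the default socket:
  cd /home/vagrant/spdk_repo/spdk
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # start app, wait for RPC
  bdevio_pid=$!
  test/bdev/bdevio/tests.py perform_tests                        # trigger all bdevio suites
  wait $bdevio_pid
Each suite below runs the same battery per bdev (write/read, writev/readv, comparev, reset, NVMe passthru), with the driver's *NOTICE* lines interleaved into the CUnit output.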
00:08:12.699 [2024-11-15 11:11:49.983434] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61146 ] 00:08:12.960 [2024-11-15 11:11:50.174420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.960 [2024-11-15 11:11:50.315197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.960 [2024-11-15 11:11:50.315358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.960 [2024-11-15 11:11:50.315402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.896 11:11:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:13.896 11:11:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:08:13.896 11:11:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:13.896 I/O targets: 00:08:13.896 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:13.896 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:13.896 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:13.896 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:13.896 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:13.896 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:13.896 00:08:13.896 00:08:13.896 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.896 http://cunit.sourceforge.net/ 00:08:13.896 00:08:13.896 00:08:13.896 Suite: bdevio tests on: Nvme3n1 00:08:13.896 Test: blockdev write read block ...passed 00:08:13.896 Test: blockdev write zeroes read block ...passed 00:08:13.896 Test: blockdev write zeroes read no split ...passed 00:08:13.896 Test: blockdev write zeroes read split ...passed 00:08:13.896 Test: blockdev write zeroes read split partial ...passed 00:08:13.896 Test: blockdev reset ...[2024-11-15 11:11:51.275917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:13.896 [2024-11-15 11:11:51.280395] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. passed 00:08:13.896 Test: blockdev write read 8 blocks ...
00:08:13.896 passed 00:08:13.896 Test: blockdev write read size > 128k ...passed 00:08:13.896 Test: blockdev write read invalid size ...passed 00:08:13.896 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:13.896 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:13.896 Test: blockdev write read max offset ...passed 00:08:13.896 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:13.896 Test: blockdev writev readv 8 blocks ...passed 00:08:13.896 Test: blockdev writev readv 30 x 1block ...passed 00:08:13.896 Test: blockdev writev readv block ...passed 00:08:13.896 Test: blockdev writev readv size > 128k ...passed 00:08:13.896 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:13.896 Test: blockdev comparev and writev ...[2024-11-15 11:11:51.290187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be00a000 len:0x1000 00:08:13.896 passed 00:08:13.896 Test: blockdev nvme passthru rw ...[2024-11-15 11:11:51.290476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:13.896 passed 00:08:13.896 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:11:51.291588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:13.896 passed 00:08:13.896 Test: blockdev nvme admin passthru ...[2024-11-15 11:11:51.291833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:14.155 passed 00:08:14.155 Test: blockdev copy ...passed 00:08:14.155 Suite: bdevio tests on: Nvme2n3 00:08:14.155 Test: blockdev write read block ...passed 00:08:14.155 Test: blockdev write zeroes read block ...passed 00:08:14.155 Test: blockdev write zeroes read no split ...passed 00:08:14.155 Test: blockdev write zeroes read split ...passed 00:08:14.155 Test: blockdev write zeroes read split partial ...passed 00:08:14.155 Test: blockdev reset ...[2024-11-15 11:11:51.383877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:14.155 [2024-11-15 11:11:51.388612] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed
00:08:14.155 00:08:14.155 Test: blockdev write read 8 blocks ...passed 00:08:14.155 Test: blockdev write read size > 128k ...passed 00:08:14.155 Test: blockdev write read invalid size ...passed 00:08:14.155 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:14.155 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:14.155 Test: blockdev write read max offset ...passed 00:08:14.155 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:14.155 Test: blockdev writev readv 8 blocks ...passed 00:08:14.155 Test: blockdev writev readv 30 x 1block ...passed 00:08:14.155 Test: blockdev writev readv block ...passed 00:08:14.155 Test: blockdev writev readv size > 128k ...passed 00:08:14.155 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:14.155 Test: blockdev comparev and writev ...[2024-11-15 11:11:51.398684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:08:14.155 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2a1206000 len:0x1000 00:08:14.155 [2024-11-15 11:11:51.398959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:14.155 passed 00:08:14.155 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:11:51.399905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:14.155 [2024-11-15 11:11:51.400050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:14.155 passed 00:08:14.155 Test: blockdev nvme admin passthru ...passed 00:08:14.155 Test: blockdev copy ...passed 00:08:14.155 Suite: bdevio tests on: Nvme2n2 00:08:14.155 Test: blockdev write read block ...passed 00:08:14.155 Test: blockdev write zeroes read block ...passed 00:08:14.155 Test: blockdev write zeroes read no split ...passed 00:08:14.155 Test: blockdev write zeroes read split ...passed 00:08:14.155 Test: blockdev write zeroes read split partial ...passed 00:08:14.155 Test: blockdev reset ...[2024-11-15 11:11:51.479359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:14.155 passed 00:08:14.155 Test: blockdev write read 8 blocks ...[2024-11-15 11:11:51.483627] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:14.155 passed 00:08:14.155 Test: blockdev write read size > 128k ...passed 00:08:14.155 Test: blockdev write read invalid size ...passed 00:08:14.155 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:14.155 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:14.155 Test: blockdev write read max offset ...passed 00:08:14.155 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:14.155 Test: blockdev writev readv 8 blocks ...passed 00:08:14.155 Test: blockdev writev readv 30 x 1block ...passed 00:08:14.155 Test: blockdev writev readv block ...passed 00:08:14.155 Test: blockdev writev readv size > 128k ...passed 00:08:14.155 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:14.155 Test: blockdev comparev and writev ...[2024-11-15 11:11:51.499928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:08:14.155 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2ce03c000 len:0x1000 00:08:14.155 [2024-11-15 11:11:51.500200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:14.155 passed 00:08:14.155 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:11:51.501302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:14.155 passed[2024-11-15 11:11:51.501442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:14.155 00:08:14.155 Test: blockdev nvme admin passthru ...passed 00:08:14.155 Test: blockdev copy ...passed 00:08:14.155 Suite: bdevio tests on: Nvme2n1 00:08:14.155 Test: blockdev write read block ...passed 00:08:14.155 Test: blockdev write zeroes read block ...passed 00:08:14.155 Test: blockdev write zeroes read no split ...passed 00:08:14.414 Test: blockdev write zeroes read split ...passed 00:08:14.414 Test: blockdev write zeroes read split partial ...passed 00:08:14.414 Test: blockdev reset ...[2024-11-15 11:11:51.596961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:14.414 [2024-11-15 11:11:51.601376] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:14.414 passed 00:08:14.414 Test: blockdev write read 8 blocks ...passed 00:08:14.414 Test: blockdev write read size > 128k ...passed 00:08:14.414 Test: blockdev write read invalid size ...passed 00:08:14.414 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:14.414 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:14.414 Test: blockdev write read max offset ...passed 00:08:14.414 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:14.415 Test: blockdev writev readv 8 blocks ...passed 00:08:14.415 Test: blockdev writev readv 30 x 1block ...passed 00:08:14.415 Test: blockdev writev readv block ...passed 00:08:14.415 Test: blockdev writev readv size > 128k ...passed 00:08:14.415 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:14.415 Test: blockdev comparev and writev ...[2024-11-15 11:11:51.610925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce038000 len:0x1000 00:08:14.415 [2024-11-15 11:11:51.611189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:14.415 passed 00:08:14.415 Test: blockdev nvme passthru rw ...passed 00:08:14.415 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:11:51.612252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:14.415 [2024-11-15 11:11:51.612383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:14.415 passed 00:08:14.415 Test: blockdev nvme admin passthru ...passed 00:08:14.415 Test: blockdev copy ...passed 00:08:14.415 Suite: bdevio tests on: Nvme1n1 00:08:14.415 Test: blockdev write read block ...passed 00:08:14.415 Test: blockdev write zeroes read block ...passed 00:08:14.415 Test: blockdev write zeroes read no split ...passed 00:08:14.415 Test: blockdev write zeroes read split ...passed 00:08:14.415 Test: blockdev write zeroes read split partial ...passed 00:08:14.415 Test: blockdev reset ...[2024-11-15 11:11:51.694987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:14.415 [2024-11-15 11:11:51.698786] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:14.415 passed 00:08:14.415 Test: blockdev write read 8 blocks ...passed 00:08:14.415 Test: blockdev write read size > 128k ...passed 00:08:14.415 Test: blockdev write read invalid size ...passed 00:08:14.415 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:14.415 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:14.415 Test: blockdev write read max offset ...passed 00:08:14.415 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:14.415 Test: blockdev writev readv 8 blocks ...passed 00:08:14.415 Test: blockdev writev readv 30 x 1block ...passed 00:08:14.415 Test: blockdev writev readv block ...passed 00:08:14.415 Test: blockdev writev readv size > 128k ...passed 00:08:14.415 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:14.415 Test: blockdev comparev and writev ...[2024-11-15 11:11:51.709665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce034000 len:0x1000 00:08:14.415 [2024-11-15 11:11:51.709960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:14.415 passed 00:08:14.415 Test: blockdev nvme passthru rw ...passed 00:08:14.415 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:11:51.711434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:14.415 [2024-11-15 11:11:51.711773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:14.415 passed 00:08:14.415 Test: blockdev nvme admin passthru ...passed 00:08:14.415 Test: blockdev copy ...passed 00:08:14.415 Suite: bdevio tests on: Nvme0n1 00:08:14.415 Test: blockdev write read block ...passed 00:08:14.415 Test: blockdev write zeroes read block ...passed 00:08:14.415 Test: blockdev write zeroes read no split ...passed 00:08:14.415 Test: blockdev write zeroes read split ...passed 00:08:14.415 Test: blockdev write zeroes read split partial ...passed 00:08:14.415 Test: blockdev reset ...[2024-11-15 11:11:51.788489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:14.415 passed 00:08:14.415 Test: blockdev write read 8 blocks ...[2024-11-15 11:11:51.792759] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:08:14.415 passed 00:08:14.415 Test: blockdev write read size > 128k ...passed 00:08:14.415 Test: blockdev write read invalid size ...passed 00:08:14.415 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:14.415 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:14.415 Test: blockdev write read max offset ...passed 00:08:14.415 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:14.415 Test: blockdev writev readv 8 blocks ...passed 00:08:14.415 Test: blockdev writev readv 30 x 1block ...passed 00:08:14.415 Test: blockdev writev readv block ...passed 00:08:14.415 Test: blockdev writev readv size > 128k ...passed 00:08:14.415 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:14.415 Test: blockdev comparev and writev ...passed 00:08:14.415 Test: blockdev nvme passthru rw ...[2024-11-15 11:11:51.801911] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:14.415 separate metadata which is not supported yet. 00:08:14.415 passed 00:08:14.415 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:11:51.802753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:14.415 passed 00:08:14.415 Test: blockdev nvme admin passthru ...[2024-11-15 11:11:51.803000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:14.415 passed 00:08:14.415 Test: blockdev copy ...passed 00:08:14.415 00:08:14.415 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.415 suites 6 6 n/a 0 0 00:08:14.415 tests 138 138 138 0 0 00:08:14.415 asserts 893 893 893 0 n/a 00:08:14.415 00:08:14.415 Elapsed time = 1.637 seconds 00:08:14.675 0 00:08:14.675 11:11:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61146 00:08:14.675 11:11:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61146 ']' 00:08:14.675 11:11:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61146 00:08:14.675 11:11:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:08:14.675 11:11:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:14.675 11:11:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61146 00:08:14.675 11:11:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:14.675 11:11:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:14.675 11:11:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61146' 00:08:14.675 killing process with pid 61146 00:08:14.675 11:11:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61146 00:08:14.675 11:11:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61146 00:08:16.052 11:11:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:16.052 00:08:16.052 real 0m3.175s 00:08:16.052 user 0m8.069s 00:08:16.052 sys 0m0.554s 00:08:16.052 11:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.052 11:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:16.052 ************************************ 00:08:16.052 END TEST bdev_bounds 00:08:16.052 
************************************ 00:08:16.052 11:11:53 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:16.052 11:11:53 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:16.052 11:11:53 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:16.052 11:11:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.052 ************************************ 00:08:16.052 START TEST bdev_nbd 00:08:16.052 ************************************ 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61211 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61211 /var/tmp/spdk-nbd.sock 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61211 ']' 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:16.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
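bdev_nbd exports each bdev as a kernel /dev/nbd* node through the dedicated /var/tmp/spdk-nbd.sock RPC server started just above, verifies the device with a single direct-I/O read, and detaches it. Condensed from the RPC calls and dd check traced below:
  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $rpc nbd_start_disk Nvme0n1 /dev/nbd0    # export the bdev as an NBD block device
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  $rpc nbd_get_disks                       # list active nbd_device/bdev_name pairs
  $rpc nbd_stop_disk /dev/nbd0             # detach when done
The one-block dd doubles as the health check: waitfornbd first polls /proc/partitions for the nbd name, then confirms the device is readable with this 4096-byte read, as the traces below show for each of the six bdevs.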
00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:16.052 11:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:16.053 11:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:16.053 11:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:16.053 [2024-11-15 11:11:53.253535] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:08:16.053 [2024-11-15 11:11:53.253676] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.053 [2024-11-15 11:11:53.439044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.311 [2024-11-15 11:11:53.603266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:17.247 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:17.506 11:11:54 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:17.506 1+0 records in 00:08:17.506 1+0 records out 00:08:17.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664311 s, 6.2 MB/s 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:17.506 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:17.765 1+0 records in 00:08:17.765 1+0 records out 00:08:17.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683142 s, 6.0 MB/s 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:17.765 11:11:54 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:17.765 11:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.024 1+0 records in 00:08:18.024 1+0 records out 00:08:18.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529364 s, 7.7 MB/s 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:18.024 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( 
i = 1 )) 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.283 1+0 records in 00:08:18.283 1+0 records out 00:08:18.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000802499 s, 5.1 MB/s 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:18.283 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.542 1+0 records in 00:08:18.542 1+0 records out 00:08:18.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724942 s, 5.7 MB/s 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:18.542 11:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.800 1+0 records in 00:08:18.800 1+0 records out 00:08:18.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644968 s, 6.4 MB/s 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:18.800 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:19.058 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:19.058 { 00:08:19.058 "nbd_device": "/dev/nbd0", 00:08:19.058 "bdev_name": "Nvme0n1" 00:08:19.058 }, 00:08:19.058 { 00:08:19.058 "nbd_device": "/dev/nbd1", 00:08:19.058 "bdev_name": "Nvme1n1" 00:08:19.058 }, 00:08:19.058 { 00:08:19.058 "nbd_device": "/dev/nbd2", 00:08:19.058 "bdev_name": "Nvme2n1" 00:08:19.058 }, 00:08:19.058 { 00:08:19.058 "nbd_device": "/dev/nbd3", 00:08:19.058 "bdev_name": "Nvme2n2" 00:08:19.058 }, 00:08:19.058 { 00:08:19.058 "nbd_device": "/dev/nbd4", 00:08:19.058 "bdev_name": "Nvme2n3" 00:08:19.058 }, 00:08:19.058 { 00:08:19.058 "nbd_device": "/dev/nbd5", 00:08:19.058 "bdev_name": "Nvme3n1" 00:08:19.058 } 00:08:19.058 ]' 00:08:19.058 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:19.058 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:19.058 { 00:08:19.058 "nbd_device": "/dev/nbd0", 00:08:19.058 "bdev_name": "Nvme0n1" 00:08:19.058 }, 00:08:19.058 { 00:08:19.058 "nbd_device": "/dev/nbd1", 00:08:19.058 "bdev_name": "Nvme1n1" 00:08:19.058 }, 00:08:19.058 { 00:08:19.058 
"nbd_device": "/dev/nbd2", 00:08:19.059 "bdev_name": "Nvme2n1" 00:08:19.059 }, 00:08:19.059 { 00:08:19.059 "nbd_device": "/dev/nbd3", 00:08:19.059 "bdev_name": "Nvme2n2" 00:08:19.059 }, 00:08:19.059 { 00:08:19.059 "nbd_device": "/dev/nbd4", 00:08:19.059 "bdev_name": "Nvme2n3" 00:08:19.059 }, 00:08:19.059 { 00:08:19.059 "nbd_device": "/dev/nbd5", 00:08:19.059 "bdev_name": "Nvme3n1" 00:08:19.059 } 00:08:19.059 ]' 00:08:19.059 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:19.059 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:19.059 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.059 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:19.059 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:19.059 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:19.059 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.059 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:19.317 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:19.317 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:19.317 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:19.317 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.317 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.317 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:19.317 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:19.317 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.317 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.317 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:19.576 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:19.576 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:19.576 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:19.576 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.576 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.576 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:19.576 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:19.576 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.576 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.576 11:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:19.835 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:19.835 11:11:57 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:19.835 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:19.835 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.835 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.835 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:19.835 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:19.835 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.835 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.835 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:20.094 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:20.094 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:20.094 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:20.094 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:20.094 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:20.094 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:20.094 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:20.094 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:20.094 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:20.094 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:20.354 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:20.354 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:20.354 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:20.354 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:20.354 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:20.354 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:20.354 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:20.354 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:20.354 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:20.354 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:20.612 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:20.612 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:20.612 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:20.612 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:20.612 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:20.613 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:20.613 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
00:08:20.613 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:20.613 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:20.613 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:20.613 11:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:20.871 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:21.130 /dev/nbd0 00:08:21.130 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:21.130 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:21.130 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:21.130 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:21.130 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:21.131 1+0 records in 00:08:21.131 1+0 records out 00:08:21.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051123 s, 8.0 MB/s 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:21.131 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:21.390 /dev/nbd1 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:21.390 1+0 records in 00:08:21.390 1+0 records out 
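
[Note] The start path mirrors it: after each nbd_start_disk RPC, waitfornbd treats the device as live only once it appears in /proc/partitions and answers a single 4 KiB O_DIRECT read, which is what the `1+0 records in/out` and `stat -c %s` lines around this point are checking. A sketch of that probe under the same assumptions:

    # waitfornbd: wait for /dev/$1 to register, then prove it serves I/O
    # with one direct 4 KiB read (mirrors autotest_common.sh@870-891 above).
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                        # interval assumed
        done
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ] || return 1   # reject a zero-byte read
        rm -f /tmp/nbdtest
    }
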
00:08:21.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489328 s, 8.4 MB/s 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:21.390 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:21.650 /dev/nbd10 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:21.650 1+0 records in 00:08:21.650 1+0 records out 00:08:21.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740223 s, 5.5 MB/s 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:21.650 11:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:21.909 /dev/nbd11 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:08:21.909 11:11:59 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:21.909 1+0 records in 00:08:21.909 1+0 records out 00:08:21.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582431 s, 7.0 MB/s 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:21.909 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:21.910 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:21.910 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:21.910 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:22.169 /dev/nbd12 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:22.169 1+0 records in 00:08:22.169 1+0 records out 00:08:22.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642448 s, 6.4 MB/s 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:22.169 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:22.428 /dev/nbd13 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:22.428 1+0 records in 00:08:22.428 1+0 records out 00:08:22.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755271 s, 5.4 MB/s 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.428 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:22.687 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:22.687 { 00:08:22.687 "nbd_device": "/dev/nbd0", 00:08:22.687 "bdev_name": "Nvme0n1" 00:08:22.687 }, 00:08:22.687 { 00:08:22.687 "nbd_device": "/dev/nbd1", 00:08:22.687 "bdev_name": "Nvme1n1" 00:08:22.687 }, 00:08:22.687 { 00:08:22.687 "nbd_device": "/dev/nbd10", 00:08:22.688 "bdev_name": "Nvme2n1" 00:08:22.688 }, 00:08:22.688 { 00:08:22.688 "nbd_device": "/dev/nbd11", 00:08:22.688 "bdev_name": "Nvme2n2" 00:08:22.688 }, 
00:08:22.688 { 00:08:22.688 "nbd_device": "/dev/nbd12", 00:08:22.688 "bdev_name": "Nvme2n3" 00:08:22.688 }, 00:08:22.688 { 00:08:22.688 "nbd_device": "/dev/nbd13", 00:08:22.688 "bdev_name": "Nvme3n1" 00:08:22.688 } 00:08:22.688 ]' 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:22.688 { 00:08:22.688 "nbd_device": "/dev/nbd0", 00:08:22.688 "bdev_name": "Nvme0n1" 00:08:22.688 }, 00:08:22.688 { 00:08:22.688 "nbd_device": "/dev/nbd1", 00:08:22.688 "bdev_name": "Nvme1n1" 00:08:22.688 }, 00:08:22.688 { 00:08:22.688 "nbd_device": "/dev/nbd10", 00:08:22.688 "bdev_name": "Nvme2n1" 00:08:22.688 }, 00:08:22.688 { 00:08:22.688 "nbd_device": "/dev/nbd11", 00:08:22.688 "bdev_name": "Nvme2n2" 00:08:22.688 }, 00:08:22.688 { 00:08:22.688 "nbd_device": "/dev/nbd12", 00:08:22.688 "bdev_name": "Nvme2n3" 00:08:22.688 }, 00:08:22.688 { 00:08:22.688 "nbd_device": "/dev/nbd13", 00:08:22.688 "bdev_name": "Nvme3n1" 00:08:22.688 } 00:08:22.688 ]' 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:22.688 /dev/nbd1 00:08:22.688 /dev/nbd10 00:08:22.688 /dev/nbd11 00:08:22.688 /dev/nbd12 00:08:22.688 /dev/nbd13' 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:22.688 /dev/nbd1 00:08:22.688 /dev/nbd10 00:08:22.688 /dev/nbd11 00:08:22.688 /dev/nbd12 00:08:22.688 /dev/nbd13' 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:22.688 256+0 records in 00:08:22.688 256+0 records out 00:08:22.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012685 s, 82.7 MB/s 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:22.688 11:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:22.947 256+0 records in 00:08:22.947 256+0 records out 00:08:22.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169593 s, 6.2 MB/s 00:08:22.947 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:22.947 11:12:00 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:22.947 256+0 records in 00:08:22.947 256+0 records out 00:08:22.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125928 s, 8.3 MB/s 00:08:22.947 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:22.947 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:23.206 256+0 records in 00:08:23.206 256+0 records out 00:08:23.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131684 s, 8.0 MB/s 00:08:23.206 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.206 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:23.206 256+0 records in 00:08:23.206 256+0 records out 00:08:23.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128716 s, 8.1 MB/s 00:08:23.206 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.206 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:23.465 256+0 records in 00:08:23.465 256+0 records out 00:08:23.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131082 s, 8.0 MB/s 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:23.465 256+0 records in 00:08:23.465 256+0 records out 00:08:23.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128878 s, 8.1 MB/s 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:23.465 11:12:00 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.465 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:23.724 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:23.724 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:23.724 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.724 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:23.724 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:23.724 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:23.724 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.724 11:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:23.724 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:23.724 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:23.724 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:23.724 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.724 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.725 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:23.725 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:23.725 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.725 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.725 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:23.983 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:23.983 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:23.983 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:23.983 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.983 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.983 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:23.983 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:23.983 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.983 11:12:01 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.983 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:24.241 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:24.241 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:24.242 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:24.242 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.242 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.242 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:24.242 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.242 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.242 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.242 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:24.500 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:24.500 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:24.500 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:24.500 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.500 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.500 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:24.500 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.500 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.500 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.500 11:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:24.759 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:24.759 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:24.759 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:24.759 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.759 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.759 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:24.759 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.759 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.759 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.759 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:25.017 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:25.017 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:25.017 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:25.017 
11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.017 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.017 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:25.017 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:25.017 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.017 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.017 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.017 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:25.276 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:25.535 malloc_lvol_verify 00:08:25.536 11:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:25.794 9d877e4c-4511-4089-9865-f3780ee7b001 00:08:25.794 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:26.055 434881c4-a7b6-443e-8972-dfd28dea6d0c 00:08:26.055 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:26.313 /dev/nbd0 00:08:26.313 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:26.313 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:26.313 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:26.313 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:26.313 11:12:03 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:26.313 mke2fs 1.47.0 (5-Feb-2023) 00:08:26.313 Discarding device blocks: 0/4096 done 00:08:26.313 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:26.313 00:08:26.313 Allocating group tables: 0/1 done 00:08:26.313 Writing inode tables: 0/1 done 00:08:26.313 Creating journal (1024 blocks): done 00:08:26.313 Writing superblocks and filesystem accounting information: 0/1 done 00:08:26.313 00:08:26.314 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:26.314 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.314 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:26.314 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:26.314 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:26.314 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.314 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:26.314 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61211 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61211 ']' 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61211 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61211 00:08:26.574 killing process with pid 61211 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61211' 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61211 00:08:26.574 11:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61211 00:08:27.963 11:12:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:27.963 00:08:27.963 real 0m11.885s 00:08:27.963 user 0m15.169s 00:08:27.963 sys 0m5.024s 00:08:27.963 11:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.963 11:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
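
[Note] The lvol round-trip just traced chains four RPCs before the filesystem check: a 16 MiB malloc bdev becomes an lvstore, a 4 MiB lvol is carved from it, and the lvol is exported over NBD so mkfs.ext4 can run against it (8192 512-byte sectors, hence the "4096 1k blocks" filesystem above). Condensed, with the same names and sizes as this run:

    # nbd_with_lvol_verify, condensed; rpc/sock as in the sketches above.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    while [ ! -e /sys/block/nbd0/size ] || [ "$(cat /sys/block/nbd0/size)" -eq 0 ]; do
        sleep 0.1   # wait_for_nbd_set_capacity; interval assumed
    done
    mkfs.ext4 /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd0
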
00:08:27.963 ************************************ 00:08:27.963 END TEST bdev_nbd 00:08:27.963 ************************************ 00:08:27.963 11:12:05 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:27.963 11:12:05 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:08:27.963 skipping fio tests on NVMe due to multi-ns failures. 00:08:27.963 11:12:05 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:27.963 11:12:05 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:27.963 11:12:05 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:27.963 11:12:05 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:08:27.963 11:12:05 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.963 11:12:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.963 ************************************ 00:08:27.963 START TEST bdev_verify 00:08:27.963 ************************************ 00:08:27.963 11:12:05 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:27.963 [2024-11-15 11:12:05.207762] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:08:27.963 [2024-11-15 11:12:05.207905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61601 ] 00:08:28.222 [2024-11-15 11:12:05.395889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:28.222 [2024-11-15 11:12:05.547045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.222 [2024-11-15 11:12:05.547060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.158 Running I/O for 5 seconds... 
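
[Note] bdev_verify drives the same six namespaces through bdevperf instead of NBD: verified 4 KiB reads and writes at queue depth 128 for 5 seconds on two reactor cores (-m 0x3), with -C letting both cores run a job against every bdev, which is why each namespace appears twice in the table below, once per core mask. Rerun outside the harness, the invocation would look roughly like:

    # Hedged reproduction of the run_test bdev_verify command line above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
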
00:08:31.100 17600.00 IOPS, 68.75 MiB/s [2024-11-15T11:12:09.878Z] 18944.00 IOPS, 74.00 MiB/s [2024-11-15T11:12:10.813Z] 19648.00 IOPS, 76.75 MiB/s [2024-11-15T11:12:11.748Z] 20176.00 IOPS, 78.81 MiB/s [2024-11-15T11:12:11.748Z] 20582.40 IOPS, 80.40 MiB/s
00:08:34.347 Latency(us)
00:08:34.347 [2024-11-15T11:12:11.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:34.347 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:34.347 Verification LBA range: start 0x0 length 0xbd0bd
00:08:34.347 Nvme0n1 : 5.06 1769.41 6.91 0.00 0.00 72201.41 16107.64 69483.95
00:08:34.347 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:34.347 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:34.347 Nvme0n1 : 5.06 1643.43 6.42 0.00 0.00 77696.52 15581.25 79169.59
00:08:34.347 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:34.347 Verification LBA range: start 0x0 length 0xa0000
00:08:34.347 Nvme1n1 : 5.07 1768.67 6.91 0.00 0.00 72134.17 14844.30 64009.46
00:08:34.347 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:34.347 Verification LBA range: start 0xa0000 length 0xa0000
00:08:34.347 Nvme1n1 : 5.06 1642.97 6.42 0.00 0.00 77558.40 15686.53 72431.76
00:08:34.347 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:34.347 Verification LBA range: start 0x0 length 0x80000
00:08:34.347 Nvme2n1 : 5.07 1768.21 6.91 0.00 0.00 71979.18 13896.79 57271.62
00:08:34.347 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:34.347 Verification LBA range: start 0x80000 length 0x80000
00:08:34.347 Nvme2n1 : 5.07 1641.81 6.41 0.00 0.00 77440.97 16528.76 74537.33
00:08:34.347 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:34.347 Verification LBA range: start 0x0 length 0x80000
00:08:34.347 Nvme2n2 : 5.07 1767.03 6.90 0.00 0.00 71914.07 15265.41 54744.93
00:08:34.347 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:34.347 Verification LBA range: start 0x80000 length 0x80000
00:08:34.347 Nvme2n2 : 5.07 1641.23 6.41 0.00 0.00 77263.92 16107.64 78327.36
00:08:34.347 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:34.347 Verification LBA range: start 0x0 length 0x80000
00:08:34.347 Nvme2n3 : 5.07 1766.18 6.90 0.00 0.00 71853.93 16212.92 56429.39
00:08:34.347 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:34.347 Verification LBA range: start 0x80000 length 0x80000
00:08:34.347 Nvme2n3 : 5.07 1640.77 6.41 0.00 0.00 77159.80 16423.48 80854.05
00:08:34.347 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:34.347 Verification LBA range: start 0x0 length 0x20000
00:08:34.347 Nvme3n1 : 5.08 1765.40 6.90 0.00 0.00 71783.51 16107.64 58956.08
00:08:34.347 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:34.347 Verification LBA range: start 0x20000 length 0x20000
00:08:34.347 Nvme3n1 : 5.07 1639.99 6.41 0.00 0.00 77120.63 16739.32 82959.63
00:08:34.347 [2024-11-15T11:12:11.748Z] ===================================================================================================================
00:08:34.347 [2024-11-15T11:12:11.748Z] Total : 20455.11 79.90 0.00 0.00 74575.62 13896.79 82959.63
00:08:35.724
00:08:35.724 real 0m7.929s user 0m14.495s sys 0m0.417s 11:12:13 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1128 -- # xtrace_disable 00:08:35.724 11:12:13 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:35.724 ************************************ 00:08:35.724 END TEST bdev_verify 00:08:35.724 ************************************ 00:08:35.724 11:12:13 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:35.724 11:12:13 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:08:35.724 11:12:13 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:35.724 11:12:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:35.724 ************************************ 00:08:35.724 START TEST bdev_verify_big_io 00:08:35.724 ************************************ 00:08:35.724 11:12:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:35.983 [2024-11-15 11:12:13.213336] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:08:35.983 [2024-11-15 11:12:13.213492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61710 ] 00:08:36.242 [2024-11-15 11:12:13.403911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:36.242 [2024-11-15 11:12:13.553740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.242 [2024-11-15 11:12:13.553771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.178 Running I/O for 5 seconds... 
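
[Note] bdev_verify_big_io is the same harness with -o 65536, so every verified I/O is 64 KiB: per-job IOPS drop sharply while aggregate throughput climbs (compare the Total rows, roughly 80 MiB/s at 4 KiB against 124 MiB/s here). The only flag that changes:

    # Same bdevperf invocation, 64 KiB I/O size instead of 4 KiB.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3
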
00:08:41.846 2119.00 IOPS, 132.44 MiB/s [2024-11-15T11:12:20.623Z] 3014.00 IOPS, 188.38 MiB/s [2024-11-15T11:12:20.623Z] 3263.00 IOPS, 203.94 MiB/s
00:08:43.222 Latency(us)
00:08:43.222 [2024-11-15T11:12:20.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:43.222 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:43.222 Verification LBA range: start 0x0 length 0xbd0b
00:08:43.222 Nvme0n1 : 5.61 154.78 9.67 0.00 0.00 800330.55 25056.33 852336.48
00:08:43.222 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:43.222 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:43.222 Nvme0n1 : 5.56 157.96 9.87 0.00 0.00 785606.24 21476.86 859074.31
00:08:43.222 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:43.222 Verification LBA range: start 0x0 length 0xa000
00:08:43.222 Nvme1n1 : 5.56 155.31 9.71 0.00 0.00 782339.92 72431.76 784958.10
00:08:43.222 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:43.222 Verification LBA range: start 0xa000 length 0xa000
00:08:43.222 Nvme1n1 : 5.57 160.92 10.06 0.00 0.00 756527.21 71168.41 700735.13
00:08:43.222 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:43.222 Verification LBA range: start 0x0 length 0x8000
00:08:43.222 Nvme2n1 : 5.62 159.51 9.97 0.00 0.00 745104.93 56008.28 720948.64
00:08:43.222 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:43.222 Verification LBA range: start 0x8000 length 0x8000
00:08:43.222 Nvme2n1 : 5.57 160.80 10.05 0.00 0.00 736917.40 74537.33 646832.42
00:08:43.222 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:43.222 Verification LBA range: start 0x0 length 0x8000
00:08:43.222 Nvme2n2 : 5.70 161.99 10.12 0.00 0.00 711893.53 40637.58 1037627.01
00:08:43.222 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:43.222 Verification LBA range: start 0x8000 length 0x8000
00:08:43.222 Nvme2n2 : 5.66 167.19 10.45 0.00 0.00 691813.37 33689.19 720948.64
00:08:43.222 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:43.222 Verification LBA range: start 0x0 length 0x8000
00:08:43.222 Nvme2n3 : 5.71 168.20 10.51 0.00 0.00 671384.19 42111.49 747899.99
00:08:43.222 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:43.222 Verification LBA range: start 0x8000 length 0x8000
00:08:43.222 Nvme2n3 : 5.70 165.51 10.34 0.00 0.00 681240.05 39374.24 1441897.28
00:08:43.222 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:43.222 Verification LBA range: start 0x0 length 0x2000
00:08:43.222 Nvme3n1 : 5.76 182.44 11.40 0.00 0.00 606374.09 9053.97 838860.80
00:08:43.222 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:43.222 Verification LBA range: start 0x2000 length 0x2000
00:08:43.222 Nvme3n1 : 5.78 185.55 11.60 0.00 0.00 594960.53 1210.71 1468848.63
00:08:43.222 [2024-11-15T11:12:20.623Z] ===================================================================================================================
00:08:43.222 [2024-11-15T11:12:20.623Z] Total : 1980.17 123.76 0.00 0.00 709351.22 1210.71 1468848.63
00:08:45.126
00:08:45.126 real 0m9.175s user 0m16.917s sys 0m0.472s ************************************ END TEST bdev_verify_big_io ************************************ 00:08:45.126
11:12:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:45.126 11:12:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:45.126 11:12:22 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:45.126 11:12:22 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:08:45.126 11:12:22 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:45.126 11:12:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:45.126 ************************************
00:08:45.126 START TEST bdev_write_zeroes
00:08:45.126 ************************************
00:08:45.126 11:12:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:45.385 [2024-11-15 11:12:22.464726] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization...
00:08:45.385 [2024-11-15 11:12:22.464877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61825 ]
00:08:45.645 [2024-11-15 11:12:22.653508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:45.645 [2024-11-15 11:12:22.801114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:46.212 Running I/O for 1 seconds...
00:08:47.583 74112.00 IOPS, 289.50 MiB/s
00:08:47.583 Latency(us)
00:08:47.583 [2024-11-15T11:12:24.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:47.583 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:47.583 Nvme0n1 : 1.02 12309.56 48.08 0.00 0.00 10370.36 8790.77 30109.71
00:08:47.583 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:47.583 Nvme1n1 : 1.02 12297.03 48.04 0.00 0.00 10366.77 8896.05 30530.83
00:08:47.583 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:47.583 Nvme2n1 : 1.02 12285.06 47.99 0.00 0.00 10332.84 8738.13 27793.58
00:08:47.583 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:47.583 Nvme2n2 : 1.02 12324.05 48.14 0.00 0.00 10246.15 5869.29 22003.25
00:08:47.583 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:47.583 Nvme2n3 : 1.02 12312.04 48.09 0.00 0.00 10224.65 6053.53 20529.35
00:08:47.583 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:47.583 Nvme3n1 : 1.02 12300.70 48.05 0.00 0.00 10211.65 6264.08 19687.12
00:08:47.583 [2024-11-15T11:12:24.984Z] ===================================================================================================================
00:08:47.583 [2024-11-15T11:12:24.984Z] Total : 73828.45 288.39 0.00 0.00 10291.91 5869.29 30530.83
00:08:48.519
00:08:48.519 real 0m3.557s user 0m3.079s sys 0m0.362s 11:12:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:48.519 11:12:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
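
[Note] bdev_write_zeroes swaps the workload for write_zeroes on a single core (the -c 0x1 in the EAL line above) for one second; write-zeroes commands carry no data buffer to compare, so the table reports completion latency only. Roughly:

    # Hedged reproduction of the run_test bdev_write_zeroes command line above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1
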
************************************ 00:08:48.519 END TEST bdev_write_zeroes 00:08:48.520 ************************************ 00:08:48.779 11:12:25 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:48.779 11:12:25 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:48.779 11:12:25 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:48.779 11:12:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.779 ************************************ 00:08:48.779 START TEST bdev_json_nonenclosed 00:08:48.779 ************************************ 00:08:48.779 11:12:25 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:48.779 [2024-11-15 11:12:26.086736] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:08:48.779 [2024-11-15 11:12:26.086873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61883 ] 00:08:49.039 [2024-11-15 11:12:26.272429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.039 [2024-11-15 11:12:26.417605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.039 [2024-11-15 11:12:26.417818] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:49.039 [2024-11-15 11:12:26.417846] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:49.039 [2024-11-15 11:12:26.417860] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:49.607 00:08:49.607 real 0m0.715s 00:08:49.607 user 0m0.455s 00:08:49.607 sys 0m0.155s 00:08:49.607 11:12:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:49.607 11:12:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:49.607 ************************************ 00:08:49.607 END TEST bdev_json_nonenclosed 00:08:49.607 ************************************ 00:08:49.607 11:12:26 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:49.607 11:12:26 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:49.607 11:12:26 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:49.607 11:12:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.607 ************************************ 00:08:49.607 START TEST bdev_json_nonarray 00:08:49.607 ************************************ 00:08:49.607 11:12:26 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:49.607 [2024-11-15 11:12:26.882139] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
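
[Note] Both JSON tests are negative tests: bdevperf is fed a deliberately malformed --json config and must exit through spdk_app_stop with a json_config.c error, quoted above for the non-enclosed case and just below for the non-array case. The fixture contents are never printed in this log; shapes that would trip each check are assumed here from the error strings alone:

    # Assumed fixture shapes only; the real test files are not shown in this log.
    printf '[ { "subsystems": [] } ]\n' > nonenclosed.json   # top level is not an object
    # -> json_config.c: "Invalid JSON configuration: not enclosed in {}."
    printf '{ "subsystems": {} }\n' > nonarray.json          # "subsystems" is not an array
    # -> json_config.c: "Invalid JSON configuration: 'subsystems' should be an array."
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1   # expected to fail with the first error
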
00:08:49.607 [2024-11-15 11:12:26.882290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61909 ] 00:08:49.865 [2024-11-15 11:12:27.072479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.865 [2024-11-15 11:12:27.231794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.865 [2024-11-15 11:12:27.231934] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:49.865 [2024-11-15 11:12:27.231961] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:49.865 [2024-11-15 11:12:27.231973] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.124 00:08:50.124 real 0m0.737s 00:08:50.124 user 0m0.451s 00:08:50.124 sys 0m0.179s 00:08:50.124 11:12:27 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:50.124 11:12:27 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:50.124 ************************************ 00:08:50.124 END TEST bdev_json_nonarray 00:08:50.124 ************************************ 00:08:50.383 11:12:27 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:08:50.383 11:12:27 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:08:50.383 11:12:27 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:08:50.383 11:12:27 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:50.383 11:12:27 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:08:50.383 11:12:27 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:50.383 11:12:27 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:50.383 11:12:27 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:50.383 11:12:27 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:50.383 11:12:27 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:50.383 11:12:27 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:50.383 00:08:50.383 real 0m45.464s 00:08:50.383 user 1m5.846s 00:08:50.383 sys 0m8.877s 00:08:50.383 11:12:27 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:50.383 ************************************ 00:08:50.383 END TEST blockdev_nvme 00:08:50.383 11:12:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.383 ************************************ 00:08:50.383 11:12:27 -- spdk/autotest.sh@209 -- # uname -s 00:08:50.383 11:12:27 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:50.383 11:12:27 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:50.383 11:12:27 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:50.383 11:12:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:50.383 11:12:27 -- common/autotest_common.sh@10 -- # set +x 00:08:50.383 ************************************ 00:08:50.383 START TEST blockdev_nvme_gpt 00:08:50.383 ************************************ 00:08:50.383 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:50.642 * Looking for test storage... 
00:08:50.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.642 11:12:27 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:50.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.642 --rc genhtml_branch_coverage=1 00:08:50.642 --rc genhtml_function_coverage=1 00:08:50.642 --rc genhtml_legend=1 00:08:50.642 --rc geninfo_all_blocks=1 00:08:50.642 --rc geninfo_unexecuted_blocks=1 00:08:50.642 00:08:50.642 ' 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:50.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.642 --rc 
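The scripts/common.sh trace above is a plain shell version comparison: each version string is split into numeric fields and the fields are compared left to right (here deciding that lcov 1.15 is older than 2, so the branch/function coverage flags get enabled). A self-contained sketch of the same technique; the function name is ours, not SPDK's:

# Sketch of the cmp_versions idea: split on dots, compare field by field.
version_lt() {
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"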
genhtml_branch_coverage=1 00:08:50.642 --rc genhtml_function_coverage=1 00:08:50.642 --rc genhtml_legend=1 00:08:50.642 --rc geninfo_all_blocks=1 00:08:50.642 --rc geninfo_unexecuted_blocks=1 00:08:50.642 00:08:50.642 ' 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:50.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.642 --rc genhtml_branch_coverage=1 00:08:50.642 --rc genhtml_function_coverage=1 00:08:50.642 --rc genhtml_legend=1 00:08:50.642 --rc geninfo_all_blocks=1 00:08:50.642 --rc geninfo_unexecuted_blocks=1 00:08:50.642 00:08:50.642 ' 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:50.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.642 --rc genhtml_branch_coverage=1 00:08:50.642 --rc genhtml_function_coverage=1 00:08:50.642 --rc genhtml_legend=1 00:08:50.642 --rc geninfo_all_blocks=1 00:08:50.642 --rc geninfo_unexecuted_blocks=1 00:08:50.642 00:08:50.642 ' 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61993 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:08:50.642 11:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61993 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 61993 ']' 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:50.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.642 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:50.643 11:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:50.643 [2024-11-15 11:12:28.036651] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:08:50.643 [2024-11-15 11:12:28.036795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61993 ] 00:08:50.901 [2024-11-15 11:12:28.233224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.160 [2024-11-15 11:12:28.399806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.095 11:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:52.095 11:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:08:52.095 11:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:08:52.095 11:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:08:52.095 11:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:52.682 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:52.940 Waiting for block devices as requested 00:08:53.197 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:53.197 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:53.454 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:53.454 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.765 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:58.765 11:12:35 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:58.765 11:12:35 
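The loop traced above is get_zoned_devs filtering out zoned namespaces before GPT setup; the probe reduces to one sysfs read per block device. A standalone sketch of the same check:

# Sketch of the zoned-device probe: a device is zoned if
# /sys/block/<dev>/queue/zoned reports anything other than "none".
for dev in /sys/block/nvme*; do
    [[ -e $dev/queue/zoned ]] || continue
    if [[ $(<"$dev/queue/zoned") != none ]]; then
        echo "${dev##*/} is zoned; excluded from GPT setup"
    fi
done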
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:58.765 BYT; 00:08:58.765 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:58.765 BYT; 00:08:58.765 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:58.765 11:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:58.765 11:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:58.765 11:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:58.765 11:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:58.765 11:12:36 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:58.765 11:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:58.766 11:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:09:00.143 The operation has completed successfully. 00:09:00.143 11:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:09:01.079 The operation has completed successfully. 00:09:01.079 11:12:38 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:01.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:02.583 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:02.583 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:02.583 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:02.583 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:02.583 11:12:39 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:09:02.583 11:12:39 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.583 11:12:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:02.842 [] 00:09:02.842 11:12:39 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.842 11:12:39 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:09:02.842 11:12:39 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:02.842 11:12:39 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:02.842 11:12:39 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:02.842 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:02.842 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.842 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.100 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.100 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:09:03.100 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.100 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.100 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.100 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:09:03.100 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:09:03.100 11:12:40 
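setup_gpt_conf above labels /dev/nvme0n1 and then retypes both partitions with SPDK's own GPT type GUIDs, which get_spdk_gpt and get_spdk_gpt_old scrape out of module/bdev/gpt/gpt.h. A condensed sketch of that extraction and the first sgdisk call, with the GUID values copied from this run's trace:

# Pull the macro argument out of gpt.h with an IFS='()' read, then
# normalize it, mirroring the scripts/common.sh steps traced above.
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
spdk_guid=${spdk_guid//, /-}   # 0x6527994e-0x2c5a-... (join fields with dashes)
spdk_guid=${spdk_guid//0x/}    # 6527994e-2c5a-4eec-9613-8f5944074e8b
# Retype partition 1 and stamp its unique GUID, as blockdev.sh@131 does:
sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1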
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.100 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.100 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.100 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:09:03.100 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.100 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.100 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.100 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:03.100 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.100 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.100 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.100 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:09:03.360 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:09:03.360 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.360 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:09:03.360 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.360 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.360 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:09:03.360 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:09:03.361 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "67d3fc34-f6eb-4a7e-8ac2-55d1eae98ca7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "67d3fc34-f6eb-4a7e-8ac2-55d1eae98ca7",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "305aa828-c930-42d6-9992-0be3bc4bb392"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "305aa828-c930-42d6-9992-0be3bc4bb392",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e53c2988-51bf-49b0-a533-448e5fb48656"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e53c2988-51bf-49b0-a533-448e5fb48656",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "eb4cd9bf-a804-48b2-a04e-4c31cc6723cc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eb4cd9bf-a804-48b2-a04e-4c31cc6723cc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "512a1527-86cd-41ec-b4fb-33c45021b9f4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "512a1527-86cd-41ec-b4fb-33c45021b9f4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:03.361 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:09:03.361 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:09:03.361 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:09:03.361 11:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 61993 00:09:03.361 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 61993 ']' 00:09:03.361 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 61993 00:09:03.361 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:09:03.361 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:03.361 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61993 00:09:03.361 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:03.361 killing process with pid 61993 00:09:03.361 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:03.361 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61993' 00:09:03.361 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 61993 00:09:03.361 11:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 61993 00:09:06.678 11:12:43 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:06.678 11:12:43 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:06.678 11:12:43 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:09:06.678 11:12:43 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:06.679 11:12:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:06.679 ************************************ 00:09:06.679 START TEST bdev_hello_world 00:09:06.679 ************************************ 00:09:06.679 11:12:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:06.679 
[2024-11-15 11:12:43.463210] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:09:06.679 [2024-11-15 11:12:43.463341] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62650 ] 00:09:06.679 [2024-11-15 11:12:43.646888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.679 [2024-11-15 11:12:43.791621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.246 [2024-11-15 11:12:44.537190] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:07.246 [2024-11-15 11:12:44.537254] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:07.246 [2024-11-15 11:12:44.537284] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:07.246 [2024-11-15 11:12:44.540550] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:07.246 [2024-11-15 11:12:44.541187] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:07.246 [2024-11-15 11:12:44.541223] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:07.246 [2024-11-15 11:12:44.541491] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:07.246 00:09:07.246 [2024-11-15 11:12:44.541517] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:08.633 00:09:08.633 real 0m2.447s 00:09:08.633 user 0m1.996s 00:09:08.633 sys 0m0.336s 00:09:08.633 11:12:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.633 11:12:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:08.633 ************************************ 00:09:08.633 END TEST bdev_hello_world 00:09:08.633 ************************************ 00:09:08.633 11:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:09:08.633 11:12:45 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:08.633 11:12:45 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:08.633 11:12:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:08.633 ************************************ 00:09:08.633 START TEST bdev_bounds 00:09:08.633 ************************************ 00:09:08.633 11:12:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:09:08.633 11:12:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62698 00:09:08.633 11:12:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:08.633 Process bdevio pid: 62698 00:09:08.633 11:12:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:08.633 11:12:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62698' 00:09:08.633 11:12:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62698 00:09:08.633 11:12:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 62698 ']' 00:09:08.633 11:12:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.633 11:12:45 
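Every test in this log runs under the same wrapper: asterisk banner, timed body, asterisk banner, with the real/user/sys lines coming from the shell's time keyword. An illustrative reduction of that pattern (our sketch, not the actual run_test from autotest_common.sh):

run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                  # produces the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
run_test_sketch demo_sleep sleep 1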
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:08.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.633 11:12:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.633 11:12:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:08.633 11:12:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:08.633 [2024-11-15 11:12:45.978921] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:09:08.633 [2024-11-15 11:12:45.979053] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62698 ] 00:09:08.892 [2024-11-15 11:12:46.164140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:09.151 [2024-11-15 11:12:46.309604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.151 [2024-11-15 11:12:46.309740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.151 [2024-11-15 11:12:46.309789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.718 11:12:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:09.718 11:12:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:09:09.718 11:12:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:09.978 I/O targets: 00:09:09.978 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:09.978 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:09:09.978 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:09:09.978 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:09.978 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:09.978 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:09.978 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:09.978 00:09:09.978 00:09:09.978 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.978 http://cunit.sourceforge.net/ 00:09:09.978 00:09:09.978 00:09:09.978 Suite: bdevio tests on: Nvme3n1 00:09:09.978 Test: blockdev write read block ...passed 00:09:09.978 Test: blockdev write zeroes read block ...passed 00:09:09.978 Test: blockdev write zeroes read no split ...passed 00:09:09.978 Test: blockdev write zeroes read split ...passed 00:09:09.978 Test: blockdev write zeroes read split partial ...passed 00:09:09.978 Test: blockdev reset ...[2024-11-15 11:12:47.261988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:09.978 [2024-11-15 11:12:47.266839] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:09:09.978 passed 00:09:09.978 Test: blockdev write read 8 blocks ...passed 00:09:09.978 Test: blockdev write read size > 128k ...passed 00:09:09.978 Test: blockdev write read invalid size ...passed 00:09:09.978 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:09.978 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:09.978 Test: blockdev write read max offset ...passed 00:09:09.978 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:09.978 Test: blockdev writev readv 8 blocks ...passed 00:09:09.978 Test: blockdev writev readv 30 x 1block ...passed 00:09:09.978 Test: blockdev writev readv block ...passed 00:09:09.978 Test: blockdev writev readv size > 128k ...passed 00:09:09.978 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:09.978 Test: blockdev comparev and writev ...[2024-11-15 11:12:47.277457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb804000 len:0x1000 00:09:09.978 [2024-11-15 11:12:47.277663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:09.978 passed 00:09:09.978 Test: blockdev nvme passthru rw ...passed 00:09:09.978 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:12:47.278957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:09.978 [2024-11-15 11:12:47.279088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:09.978 passed 00:09:09.978 Test: blockdev nvme admin passthru ...passed 00:09:09.978 Test: blockdev copy ...passed 00:09:09.978 Suite: bdevio tests on: Nvme2n3 00:09:09.978 Test: blockdev write read block ...passed 00:09:09.978 Test: blockdev write zeroes read block ...passed 00:09:09.978 Test: blockdev write zeroes read no split ...passed 00:09:09.978 Test: blockdev write zeroes read split ...passed 00:09:09.978 Test: blockdev write zeroes read split partial ...passed 00:09:09.978 Test: blockdev reset ...[2024-11-15 11:12:47.356572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:09.978 [2024-11-15 11:12:47.361971] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:09.978 passed 00:09:09.978 Test: blockdev write read 8 blocks ...passed 00:09:09.978 Test: blockdev write read size > 128k ...passed 00:09:09.978 Test: blockdev write read invalid size ...passed 00:09:09.978 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:09.978 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:09.979 Test: blockdev write read max offset ...passed 00:09:09.979 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:09.979 Test: blockdev writev readv 8 blocks ...passed 00:09:09.979 Test: blockdev writev readv 30 x 1block ...passed 00:09:09.979 Test: blockdev writev readv block ...passed 00:09:09.979 Test: blockdev writev readv size > 128k ...passed 00:09:09.979 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:09.979 Test: blockdev comparev and writev ...[2024-11-15 11:12:47.376292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb802000 len:0x1000 00:09:09.979 [2024-11-15 11:12:47.376361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:09.979 passed 00:09:09.979 Test: blockdev nvme passthru rw ...passed 00:09:09.979 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:12:47.377309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:09.979 passed 00:09:09.979 Test: blockdev nvme admin passthru ...[2024-11-15 11:12:47.377348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:10.238 passed 00:09:10.238 Test: blockdev copy ...passed 00:09:10.238 Suite: bdevio tests on: Nvme2n2 00:09:10.238 Test: blockdev write read block ...passed 00:09:10.238 Test: blockdev write zeroes read block ...passed 00:09:10.238 Test: blockdev write zeroes read no split ...passed 00:09:10.238 Test: blockdev write zeroes read split ...passed 00:09:10.238 Test: blockdev write zeroes read split partial ...passed 00:09:10.238 Test: blockdev reset ...[2024-11-15 11:12:47.490272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:10.238 [2024-11-15 11:12:47.495437] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:10.238 passed 00:09:10.238 Test: blockdev write read 8 blocks ...passed 00:09:10.238 Test: blockdev write read size > 128k ...passed 00:09:10.238 Test: blockdev write read invalid size ...passed 00:09:10.238 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.238 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.238 Test: blockdev write read max offset ...passed 00:09:10.238 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.238 Test: blockdev writev readv 8 blocks ...passed 00:09:10.238 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.238 Test: blockdev writev readv block ...passed 00:09:10.238 Test: blockdev writev readv size > 128k ...passed 00:09:10.238 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.238 Test: blockdev comparev and writev ...[2024-11-15 11:12:47.504403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfe38000 len:0x1000 00:09:10.238 [2024-11-15 11:12:47.504489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:10.238 passed 00:09:10.238 Test: blockdev nvme passthru rw ...passed 00:09:10.238 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:12:47.505514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:10.238 [2024-11-15 11:12:47.505550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:10.238 passed 00:09:10.238 Test: blockdev nvme admin passthru ...passed 00:09:10.238 Test: blockdev copy ...passed 00:09:10.238 Suite: bdevio tests on: Nvme2n1 00:09:10.238 Test: blockdev write read block ...passed 00:09:10.238 Test: blockdev write zeroes read block ...passed 00:09:10.239 Test: blockdev write zeroes read no split ...passed 00:09:10.239 Test: blockdev write zeroes read split ...passed 00:09:10.239 Test: blockdev write zeroes read split partial ...passed 00:09:10.239 Test: blockdev reset ...[2024-11-15 11:12:47.587844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:10.239 [2024-11-15 11:12:47.593011] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:10.239 passed 00:09:10.239 Test: blockdev write read 8 blocks ...passed 00:09:10.239 Test: blockdev write read size > 128k ...passed 00:09:10.239 Test: blockdev write read invalid size ...passed 00:09:10.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.239 Test: blockdev write read max offset ...passed 00:09:10.239 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.239 Test: blockdev writev readv 8 blocks ...passed 00:09:10.239 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.239 Test: blockdev writev readv block ...passed 00:09:10.239 Test: blockdev writev readv size > 128k ...passed 00:09:10.239 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.239 Test: blockdev comparev and writev ...[2024-11-15 11:12:47.602401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfe34000 len:0x1000 00:09:10.239 [2024-11-15 11:12:47.602478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:10.239 passed 00:09:10.239 Test: blockdev nvme passthru rw ...passed 00:09:10.239 Test: blockdev nvme passthru vendor specific ...passed 00:09:10.239 Test: blockdev nvme admin passthru ...[2024-11-15 11:12:47.603387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:10.239 [2024-11-15 11:12:47.603422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:10.239 passed 00:09:10.239 Test: blockdev copy ...passed 00:09:10.239 Suite: bdevio tests on: Nvme1n1p2 00:09:10.239 Test: blockdev write read block ...passed 00:09:10.239 Test: blockdev write zeroes read block ...passed 00:09:10.239 Test: blockdev write zeroes read no split ...passed 00:09:10.498 Test: blockdev write zeroes read split ...passed 00:09:10.498 Test: blockdev write zeroes read split partial ...passed 00:09:10.498 Test: blockdev reset ...[2024-11-15 11:12:47.690051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:10.498 [2024-11-15 11:12:47.694740] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:10.498 passed 00:09:10.498 Test: blockdev write read 8 blocks ...passed 00:09:10.498 Test: blockdev write read size > 128k ...passed 00:09:10.498 Test: blockdev write read invalid size ...passed 00:09:10.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.498 Test: blockdev write read max offset ...passed 00:09:10.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.498 Test: blockdev writev readv 8 blocks ...passed 00:09:10.498 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.498 Test: blockdev writev readv block ...passed 00:09:10.498 Test: blockdev writev readv size > 128k ...passed 00:09:10.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.498 Test: blockdev comparev and writev ...[2024-11-15 11:12:47.704583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cfe30000 len:0x1000 00:09:10.498 [2024-11-15 11:12:47.704651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:10.498 passed 00:09:10.498 Test: blockdev nvme passthru rw ...passed 00:09:10.498 Test: blockdev nvme passthru vendor specific ...passed 00:09:10.498 Test: blockdev nvme admin passthru ...passed 00:09:10.498 Test: blockdev copy ...passed 00:09:10.498 Suite: bdevio tests on: Nvme1n1p1 00:09:10.498 Test: blockdev write read block ...passed 00:09:10.498 Test: blockdev write zeroes read block ...passed 00:09:10.498 Test: blockdev write zeroes read no split ...passed 00:09:10.498 Test: blockdev write zeroes read split ...passed 00:09:10.498 Test: blockdev write zeroes read split partial ...passed 00:09:10.498 Test: blockdev reset ...[2024-11-15 11:12:47.777533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:10.498 [2024-11-15 11:12:47.782169] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:10.498 passed 00:09:10.498 Test: blockdev write read 8 blocks ...passed 00:09:10.498 Test: blockdev write read size > 128k ...passed 00:09:10.498 Test: blockdev write read invalid size ...passed 00:09:10.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.498 Test: blockdev write read max offset ...passed 00:09:10.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.498 Test: blockdev writev readv 8 blocks ...passed 00:09:10.498 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.498 Test: blockdev writev readv block ...passed 00:09:10.498 Test: blockdev writev readv size > 128k ...passed 00:09:10.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.498 Test: blockdev comparev and writev ...[2024-11-15 11:12:47.791159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bc20e000 len:0x1000 00:09:10.498 [2024-11-15 11:12:47.791230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:10.498 passed 00:09:10.498 Test: blockdev nvme passthru rw ...passed 00:09:10.498 Test: blockdev nvme passthru vendor specific ...passed 00:09:10.498 Test: blockdev nvme admin passthru ...passed 00:09:10.498 Test: blockdev copy ...passed 00:09:10.498 Suite: bdevio tests on: Nvme0n1 00:09:10.498 Test: blockdev write read block ...passed 00:09:10.498 Test: blockdev write zeroes read block ...passed 00:09:10.498 Test: blockdev write zeroes read no split ...passed 00:09:10.498 Test: blockdev write zeroes read split ...passed 00:09:10.498 Test: blockdev write zeroes read split partial ...passed 00:09:10.498 Test: blockdev reset ...[2024-11-15 11:12:47.865060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:10.498 [2024-11-15 11:12:47.869862] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:10.498 passed 00:09:10.498 Test: blockdev write read 8 blocks ...passed 00:09:10.498 Test: blockdev write read size > 128k ...passed 00:09:10.498 Test: blockdev write read invalid size ...passed 00:09:10.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.498 Test: blockdev write read max offset ...passed 00:09:10.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.498 Test: blockdev writev readv 8 blocks ...passed 00:09:10.498 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.498 Test: blockdev writev readv block ...passed 00:09:10.498 Test: blockdev writev readv size > 128k ...passed 00:09:10.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.498 Test: blockdev comparev and writev ...[2024-11-15 11:12:47.878281] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:10.498 separate metadata which is not supported yet. 
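The COMPARE FAILURE (02/85) notices printed during "blockdev comparev and writev" are the intended outcome: the test first writes one pattern, then issues an NVMe COMPARE against different data so the miscompare status path is exercised. For Nvme0n1 the test is skipped instead because that namespace carries separate metadata, which comparev_and_writev does not support yet. The same status can be provoked outside SPDK with nvme-cli against a kernel-visible namespace; a hedged sketch, with the device path and scratch files as placeholders and a 512-byte LBA format assumed:

# Write one block of zeros, then COMPARE against random data;
# the second command should fail with status 0x85 (COMPARE FAILURE).
dd if=/dev/zero of=/tmp/zero.bin bs=512 count=1
dd if=/dev/urandom of=/tmp/rand.bin bs=512 count=1
nvme write /dev/nvme0n1 --start-block=0 --block-count=0 --data-size=512 --data=/tmp/zero.bin
nvme compare /dev/nvme0n1 --start-block=0 --block-count=0 --data-size=512 --data=/tmp/rand.bin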
00:09:10.498 passed 00:09:10.498 Test: blockdev nvme passthru rw ...passed 00:09:10.498 Test: blockdev nvme passthru vendor specific ...passed 00:09:10.498 Test: blockdev nvme admin passthru ...[2024-11-15 11:12:47.879001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:10.498 [2024-11-15 11:12:47.879056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:10.498 passed 00:09:10.498 Test: blockdev copy ...passed 00:09:10.498 00:09:10.498 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.498 suites 7 7 n/a 0 0 00:09:10.498 tests 161 161 161 0 0 00:09:10.498 asserts 1025 1025 1025 0 n/a 00:09:10.498 00:09:10.498 Elapsed time = 1.900 seconds 00:09:10.498 0 00:09:10.757 11:12:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62698 00:09:10.757 11:12:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 62698 ']' 00:09:10.757 11:12:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 62698 00:09:10.757 11:12:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:09:10.757 11:12:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:10.757 11:12:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62698 00:09:10.757 11:12:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:10.757 11:12:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:10.757 killing process with pid 62698 00:09:10.757 11:12:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62698' 00:09:10.757 11:12:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 62698 00:09:10.757 11:12:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 62698 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:12.136 00:09:12.136 real 0m3.253s 00:09:12.136 user 0m8.242s 00:09:12.136 sys 0m0.542s 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:12.136 ************************************ 00:09:12.136 END TEST bdev_bounds 00:09:12.136 ************************************ 00:09:12.136 11:12:49 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:12.136 11:12:49 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:12.136 11:12:49 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:12.136 11:12:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:12.136 ************************************ 00:09:12.136 START TEST bdev_nbd 00:09:12.136 ************************************ 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:12.136 11:12:49 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62763 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62763 /var/tmp/spdk-nbd.sock 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 62763 ']' 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:12.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:12.136 11:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:12.136 [2024-11-15 11:12:49.322802] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
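For the NBD phase the suite boots a dedicated app instance: bdev_svc is started with the bdev JSON config and a private RPC socket (/var/tmp/spdk-nbd.sock), and waitforlisten then polls that socket until the RPC server answers. The handshake condenses to roughly this (paths as in this run; the polling loop is a simplification of the waitforlisten helper):

# Start bdev_svc on its own RPC socket, then wait until RPC responds.
test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &
svc_pid=$!
until scripts/rpc.py -t 1 -s /var/tmp/spdk-nbd.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
done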
00:09:12.136 [2024-11-15 11:12:49.322936] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.136 [2024-11-15 11:12:49.492128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.395 [2024-11-15 11:12:49.636645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.328 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.329 1+0 records in 00:09:13.329 1+0 records out 00:09:13.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651496 s, 6.3 MB/s 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:13.329 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.588 1+0 records in 00:09:13.588 1+0 records out 00:09:13.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00186422 s, 2.2 MB/s 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:13.588 11:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:09:13.846 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.847 1+0 records in 00:09:13.847 1+0 records out 00:09:13.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752327 s, 5.4 MB/s 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:13.847 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.106 1+0 records in 00:09:14.106 1+0 records out 00:09:14.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000937762 s, 4.4 MB/s 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:14.106 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.366 1+0 records in 00:09:14.366 1+0 records out 00:09:14.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000973483 s, 4.2 MB/s 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:14.366 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.626 1+0 records in 00:09:14.626 1+0 records out 00:09:14.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113129 s, 3.6 MB/s 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:14.626 11:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.885 1+0 records in 00:09:14.885 1+0 records out 00:09:14.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807873 s, 5.1 MB/s 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:14.885 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.144 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd0", 00:09:15.144 "bdev_name": "Nvme0n1" 00:09:15.144 }, 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd1", 00:09:15.144 "bdev_name": "Nvme1n1p1" 00:09:15.144 }, 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd2", 00:09:15.144 "bdev_name": "Nvme1n1p2" 00:09:15.144 }, 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd3", 00:09:15.144 "bdev_name": "Nvme2n1" 00:09:15.144 }, 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd4", 00:09:15.144 "bdev_name": "Nvme2n2" 00:09:15.144 }, 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd5", 00:09:15.144 "bdev_name": "Nvme2n3" 00:09:15.144 }, 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd6", 00:09:15.144 "bdev_name": "Nvme3n1" 00:09:15.144 } 00:09:15.144 ]' 00:09:15.144 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:15.144 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd0", 00:09:15.144 "bdev_name": "Nvme0n1" 00:09:15.144 }, 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd1", 00:09:15.144 "bdev_name": "Nvme1n1p1" 00:09:15.144 }, 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd2", 00:09:15.144 "bdev_name": "Nvme1n1p2" 00:09:15.144 }, 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd3", 00:09:15.144 "bdev_name": "Nvme2n1" 00:09:15.144 }, 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd4", 00:09:15.144 "bdev_name": "Nvme2n2" 00:09:15.144 }, 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd5", 00:09:15.144 "bdev_name": "Nvme2n3" 00:09:15.144 }, 00:09:15.144 { 00:09:15.144 "nbd_device": "/dev/nbd6", 00:09:15.144 "bdev_name": "Nvme3n1" 00:09:15.144 } 00:09:15.144 ]' 00:09:15.144 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:15.144 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:15.144 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.144 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:15.144 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:15.144 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:15.144 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.144 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:15.402 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:15.402 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:15.402 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:15.402 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.402 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.402 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:15.402 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:15.402 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.402 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.402 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:15.660 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:15.660 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:15.660 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:15.660 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.660 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.660 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:15.660 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:15.660 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.660 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.661 11:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:15.920 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:15.920 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:15.920 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:15.920 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.920 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.920 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:15.920 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:15.920 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.920 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.920 11:12:53 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:16.179 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:16.179 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:16.179 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:16.179 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.179 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.179 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:16.179 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:16.179 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.179 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.179 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.438 11:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:16.698 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:16.698 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:16.698 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
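This first pass (nbd_rpc_start_stop_verify) checks only the attach/detach plumbing: each bdev is exported over NBD, waitfornbd confirms the node shows up in /proc/partitions, then nbd_stop_disk tears it down and waitfornbd_exit confirms it disappears again. Stripped of the xtrace noise, one iteration is essentially:

# Export a bdev as an NBD node, confirm the kernel sees it, then detach it.
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
while grep -q -w nbd0 /proc/partitions; do sleep 0.1; done

The real helpers bound both loops at 20 retries, as the (( i <= 20 )) checks above show.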
00:09:16.698 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.698 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.698 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:16.698 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:16.698 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.698 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:16.698 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.698 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:16.957 11:12:54 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:16.957 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:17.216 /dev/nbd0 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.216 1+0 records in 00:09:17.216 1+0 records out 00:09:17.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674956 s, 6.1 MB/s 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:17.216 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:17.475 /dev/nbd1 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:17.475 11:12:54 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.475 1+0 records in 00:09:17.475 1+0 records out 00:09:17.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000812386 s, 5.0 MB/s 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:17.475 11:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:17.733 /dev/nbd10 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.733 1+0 records in 00:09:17.733 1+0 records out 00:09:17.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648689 s, 6.3 MB/s 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:17.733 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:17.992 /dev/nbd11 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.992 1+0 records in 00:09:17.992 1+0 records out 00:09:17.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730107 s, 5.6 MB/s 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:17.992 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:18.251 /dev/nbd12 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
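The dd/stat/rm triplets logged after each attach are waitfornbd's I/O probe: one 4 KiB O_DIRECT read through the new node into a scratch file, followed by a size check, which proves the NBD queue actually services requests rather than the device merely existing in /proc/partitions. In isolation:

# Probe: a single direct 4 KiB read through the node must yield a 4096-byte file.
dd if=/dev/nbd11 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
test "$(stat -c %s /tmp/nbdtest)" -eq 4096
rm -f /tmp/nbdtest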
00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.251 1+0 records in 00:09:18.251 1+0 records out 00:09:18.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00395105 s, 1.0 MB/s 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:18.251 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:18.510 /dev/nbd13 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.510 1+0 records in 00:09:18.510 1+0 records out 00:09:18.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698267 s, 5.9 MB/s 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:18.510 11:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:18.769 /dev/nbd14 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.769 1+0 records in 00:09:18.769 1+0 records out 00:09:18.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000802683 s, 5.1 MB/s 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.769 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd0", 00:09:19.029 "bdev_name": "Nvme0n1" 00:09:19.029 }, 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd1", 00:09:19.029 "bdev_name": "Nvme1n1p1" 00:09:19.029 }, 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd10", 00:09:19.029 "bdev_name": "Nvme1n1p2" 00:09:19.029 }, 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd11", 00:09:19.029 "bdev_name": "Nvme2n1" 00:09:19.029 }, 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd12", 00:09:19.029 "bdev_name": "Nvme2n2" 00:09:19.029 }, 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd13", 00:09:19.029 "bdev_name": "Nvme2n3" 
00:09:19.029 }, 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd14", 00:09:19.029 "bdev_name": "Nvme3n1" 00:09:19.029 } 00:09:19.029 ]' 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd0", 00:09:19.029 "bdev_name": "Nvme0n1" 00:09:19.029 }, 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd1", 00:09:19.029 "bdev_name": "Nvme1n1p1" 00:09:19.029 }, 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd10", 00:09:19.029 "bdev_name": "Nvme1n1p2" 00:09:19.029 }, 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd11", 00:09:19.029 "bdev_name": "Nvme2n1" 00:09:19.029 }, 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd12", 00:09:19.029 "bdev_name": "Nvme2n2" 00:09:19.029 }, 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd13", 00:09:19.029 "bdev_name": "Nvme2n3" 00:09:19.029 }, 00:09:19.029 { 00:09:19.029 "nbd_device": "/dev/nbd14", 00:09:19.029 "bdev_name": "Nvme3n1" 00:09:19.029 } 00:09:19.029 ]' 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:19.029 /dev/nbd1 00:09:19.029 /dev/nbd10 00:09:19.029 /dev/nbd11 00:09:19.029 /dev/nbd12 00:09:19.029 /dev/nbd13 00:09:19.029 /dev/nbd14' 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:19.029 /dev/nbd1 00:09:19.029 /dev/nbd10 00:09:19.029 /dev/nbd11 00:09:19.029 /dev/nbd12 00:09:19.029 /dev/nbd13 00:09:19.029 /dev/nbd14' 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:19.029 256+0 records in 00:09:19.029 256+0 records out 00:09:19.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111837 s, 93.8 MB/s 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.029 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:19.289 256+0 records in 00:09:19.289 256+0 records out 00:09:19.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.139906 s, 7.5 MB/s 00:09:19.289 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.289 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:19.548 256+0 records in 00:09:19.548 256+0 records out 00:09:19.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149438 s, 7.0 MB/s 00:09:19.548 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.548 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:19.548 256+0 records in 00:09:19.548 256+0 records out 00:09:19.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148012 s, 7.1 MB/s 00:09:19.548 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.548 11:12:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:19.807 256+0 records in 00:09:19.807 256+0 records out 00:09:19.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145417 s, 7.2 MB/s 00:09:19.807 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.807 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:19.807 256+0 records in 00:09:19.807 256+0 records out 00:09:19.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150183 s, 7.0 MB/s 00:09:19.807 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.807 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:20.066 256+0 records in 00:09:20.066 256+0 records out 00:09:20.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138402 s, 7.6 MB/s 00:09:20.066 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.066 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:20.340 256+0 records in 00:09:20.340 256+0 records out 00:09:20.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156996 s, 6.7 MB/s 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.340 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:20.599 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:20.599 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:20.599 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:20.599 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.599 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.599 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:20.599 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:20.599 11:12:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:20.599 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.599 11:12:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:20.856 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:20.856 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:20.856 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.857 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:21.114 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:21.114 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:21.115 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:21.115 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.115 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.115 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:21.115 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:21.115 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.115 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.115 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:21.373 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:09:21.373 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:21.373 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:21.373 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.373 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.373 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:21.373 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:21.373 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.373 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.373 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:21.631 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:21.631 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:21.631 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:21.631 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.631 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.631 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:21.631 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:21.631 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.631 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.631 11:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:21.889 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:21.889 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:21.889 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:21.889 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.889 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.889 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:21.889 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:21.889 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.889 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:21.889 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.889 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:22.148 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:22.406 malloc_lvol_verify 00:09:22.406 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:22.664 bd2d203a-c070-4552-a863-882147d87fdb 00:09:22.664 11:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:22.664 9b07dd03-b3b4-4efe-9260-3186c9c965e3 00:09:22.664 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:22.922 /dev/nbd0 00:09:22.922 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:22.922 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:22.922 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:22.922 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:22.922 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:22.922 mke2fs 1.47.0 (5-Feb-2023) 00:09:22.922 Discarding device blocks: 0/4096 done 00:09:22.922 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:22.922 00:09:22.922 Allocating group tables: 0/1 done 00:09:22.922 Writing inode tables: 0/1 done 00:09:22.922 Creating journal (1024 blocks): done 00:09:22.922 Writing superblocks and filesystem accounting information: 0/1 done 00:09:22.922 00:09:22.922 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:22.922 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.922 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:22.922 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:22.922 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:22.922 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:09:22.922 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62763 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 62763 ']' 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 62763 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62763 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62763' 00:09:23.180 killing process with pid 62763 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 62763 00:09:23.180 11:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 62763 00:09:24.557 11:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:24.557 00:09:24.557 real 0m12.676s 00:09:24.557 user 0m16.184s 00:09:24.557 sys 0m5.300s 00:09:24.557 11:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:24.557 11:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:24.557 ************************************ 00:09:24.557 END TEST bdev_nbd 00:09:24.557 ************************************ 00:09:24.557 11:13:01 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:09:24.557 11:13:01 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:09:24.557 11:13:01 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:09:24.557 skipping fio tests on NVMe due to multi-ns failures. 00:09:24.557 11:13:01 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:09:24.557 11:13:01 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:24.557 11:13:01 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:24.557 11:13:01 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:09:24.557 11:13:01 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:24.557 11:13:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:24.816 ************************************ 00:09:24.816 START TEST bdev_verify 00:09:24.816 ************************************ 00:09:24.816 11:13:01 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:24.816 [2024-11-15 11:13:02.046151] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:09:24.816 [2024-11-15 11:13:02.046269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63190 ] 00:09:25.074 [2024-11-15 11:13:02.230553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:25.074 [2024-11-15 11:13:02.383303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.074 [2024-11-15 11:13:02.383331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.011 Running I/O for 5 seconds... 
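For reference, the bdevperf invocation now running, with the flags spelled out (the command itself is verbatim from the run_test line above; the per-flag explanations are our reading of the bdevperf options, not taken from the log):

    # --json   bdev configuration to load (the GPT-partitioned NVMe bdevs)
    # -q 128   queue depth per job
    # -o 4096  I/O size in bytes (4 KiB)
    # -w verify  write a pattern, read it back and compare
    # -t 5     run time in seconds
    # -C       one job per core for each bdev, which is why every bdev
    #          appears twice in the table below (Core Mask 0x1 and 0x2)
    # -m 0x3   core mask: reactors on cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3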
00:09:28.319 18048.00 IOPS, 70.50 MiB/s
[2024-11-15T11:13:06.656Z] 18816.00 IOPS, 73.50 MiB/s
[2024-11-15T11:13:07.593Z] 18752.00 IOPS, 73.25 MiB/s
[2024-11-15T11:13:08.531Z] 19024.00 IOPS, 74.31 MiB/s
[2024-11-15T11:13:08.531Z] 18867.20 IOPS, 73.70 MiB/s
00:09:31.130 Latency(us)
00:09:31.130 [2024-11-15T11:13:08.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:31.130 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:31.130 Verification LBA range: start 0x0 length 0xbd0bd
00:09:31.130 Nvme0n1 : 5.09 1346.61 5.26 0.00 0.00 94575.00 14949.58 82538.51
00:09:31.130 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:31.130 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:31.130 Nvme0n1 : 5.05 1293.54 5.05 0.00 0.00 98483.92 22424.37 96014.19
00:09:31.130 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:31.130 Verification LBA range: start 0x0 length 0x4ff80
00:09:31.130 Nvme1n1p1 : 5.09 1345.73 5.26 0.00 0.00 94490.31 16212.92 76221.79
00:09:31.130 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:31.130 Verification LBA range: start 0x4ff80 length 0x4ff80
00:09:31.130 Nvme1n1p1 : 5.11 1303.02 5.09 0.00 0.00 97746.08 18213.22 88434.12
00:09:31.130 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:31.131 Verification LBA range: start 0x0 length 0x4ff7f
00:09:31.131 Nvme1n1p2 : 5.09 1345.12 5.25 0.00 0.00 94357.00 16423.48 73273.99
00:09:31.131 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:31.131 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:09:31.131 Nvme1n1p2 : 5.11 1302.26 5.09 0.00 0.00 97588.97 19476.56 89276.35
00:09:31.131 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:31.131 Verification LBA range: start 0x0 length 0x80000
00:09:31.131 Nvme2n1 : 5.11 1353.29 5.29 0.00 0.00 93847.70 13001.92 71168.41
00:09:31.131 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:31.131 Verification LBA range: start 0x80000 length 0x80000
00:09:31.131 Nvme2n1 : 5.11 1301.80 5.09 0.00 0.00 97442.68 20318.79 88013.01
00:09:31.131 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:31.131 Verification LBA range: start 0x0 length 0x80000
00:09:31.131 Nvme2n2 : 5.11 1352.93 5.28 0.00 0.00 93725.84 13370.40 68220.61
00:09:31.131 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:31.131 Verification LBA range: start 0x80000 length 0x80000
00:09:31.131 Nvme2n2 : 5.11 1301.49 5.08 0.00 0.00 97282.51 19792.40 88855.24
00:09:31.131 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:31.131 Verification LBA range: start 0x0 length 0x80000
00:09:31.131 Nvme2n3 : 5.11 1352.60 5.28 0.00 0.00 93605.17 12475.53 71589.53
00:09:31.131 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:31.131 Verification LBA range: start 0x80000 length 0x80000
00:09:31.131 Nvme2n3 : 5.12 1301.17 5.08 0.00 0.00 97088.64 18739.61 91803.04
00:09:31.131 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:31.131 Verification LBA range: start 0x0 length 0x20000
00:09:31.131 Nvme3n1 : 5.11 1352.25 5.28 0.00 0.00 93511.55 12001.77 74116.22
00:09:31.131 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:31.131 Verification LBA range: start 0x20000 length 0x20000
Nvme3n1 : 5.12 1300.86 5.08 0.00 0.00 97005.79 17897.38 91381.92
[2024-11-15T11:13:08.532Z] ===================================================================================================================
00:09:33.035 [2024-11-15T11:13:08.532Z] Total : 18552.67 72.47 0.00 0.00 95733.73 12001.77 96014.19
00:09:33.035
00:09:33.035 real 0m8.028s
00:09:33.035 user 0m14.698s
00:09:33.035 sys 0m0.414s
00:09:33.035 11:13:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:33.035 11:13:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:09:33.035 ************************************
00:09:33.035 END TEST bdev_verify
00:09:33.035 ************************************
00:09:33.035 11:13:10 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:33.035 11:13:10 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']'
00:09:33.035 11:13:10 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:33.035 11:13:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:33.035 ************************************
00:09:33.035 START TEST bdev_verify_big_io
00:09:33.035 ************************************
00:09:33.035 11:13:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:33.035 [2024-11-15 11:13:10.172549] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization...
00:09:33.035 [2024-11-15 11:13:10.172685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63299 ]
00:09:33.035 [2024-11-15 11:13:10.354283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:33.294 [2024-11-15 11:13:10.507044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:33.294 [2024-11-15 11:13:10.507085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:34.230 Running I/O for 5 seconds...
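bdev_verify_big_io is the same verify workload with -o 65536, i.e. 64 KiB I/Os instead of 4 KiB. In both result tables the MiB/s column is simply IOPS times I/O size (taking MiB as 2^20 bytes), which gives a quick sanity check on the numbers:

    # 4 KiB run, Total row above:  18552.67 IOPS * 4096 B  = 75,991,736 B/s ≈ 72.47 MiB/s
    # 64 KiB run, e.g. the Nvme3n1 (Core Mask 0x2) row below:
    #                              180.95 IOPS * 65536 B ≈ 11,858,739 B/s ≈ 11.31 MiB/s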
00:09:40.118 410.00 IOPS, 25.62 MiB/s
[2024-11-15T11:13:17.519Z] 3028.50 IOPS, 189.28 MiB/s
[2024-11-15T11:13:17.519Z] 3549.00 IOPS, 221.81 MiB/s
00:09:40.118 Latency(us)
00:09:40.118 [2024-11-15T11:13:17.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:40.118 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0x0 length 0xbd0b
00:09:40.118 Nvme0n1 : 5.86 105.94 6.62 0.00 0.00 1146073.75 27583.02 1704672.95
00:09:40.118 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0xbd0b length 0xbd0b
00:09:40.118 Nvme0n1 : 5.59 140.27 8.77 0.00 0.00 867976.27 31373.06 1017413.50
00:09:40.118 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0x0 length 0x4ff8
00:09:40.118 Nvme1n1p1 : 5.86 110.34 6.90 0.00 0.00 1097202.07 57692.74 1441897.28
00:09:40.118 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0x4ff8 length 0x4ff8
00:09:40.118 Nvme1n1p1 : 5.59 148.40 9.27 0.00 0.00 813438.15 88013.01 882656.75
00:09:40.118 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0x0 length 0x4ff7
00:09:40.118 Nvme1n1p2 : 5.87 119.33 7.46 0.00 0.00 992499.14 69483.95 1071316.20
00:09:40.118 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0x4ff7 length 0x4ff7
00:09:40.118 Nvme1n1p2 : 5.73 142.93 8.93 0.00 0.00 819215.46 78327.36 1495799.98
00:09:40.118 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0x0 length 0x8000
00:09:40.118 Nvme2n1 : 5.87 119.94 7.50 0.00 0.00 962642.55 71168.41 1071316.20
00:09:40.118 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0x8000 length 0x8000
00:09:40.118 Nvme2n1 : 5.73 146.70 9.17 0.00 0.00 783257.11 54323.82 1522751.33
00:09:40.118 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0x0 length 0x8000
00:09:40.118 Nvme2n2 : 5.90 126.56 7.91 0.00 0.00 897908.98 23266.60 997199.99
00:09:40.118 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0x8000 length 0x8000
00:09:40.118 Nvme2n2 : 5.79 151.28 9.45 0.00 0.00 739978.97 50533.78 1536227.01
00:09:40.118 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0x0 length 0x8000
00:09:40.118 Nvme2n3 : 5.91 130.26 8.14 0.00 0.00 851032.59 6843.12 1078054.04
00:09:40.118 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0x8000 length 0x8000
00:09:40.118 Nvme2n3 : 5.91 171.09 10.69 0.00 0.00 639711.38 17370.99 1158908.09
00:09:40.118 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0x0 length 0x2000
00:09:40.118 Nvme3n1 : 5.91 134.24 8.39 0.00 0.00 805417.74 8001.18 1078054.04
00:09:40.118 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:40.118 Verification LBA range: start 0x2000 length 0x2000
00:09:40.118 Nvme3n1 : 5.93 180.95 11.31 0.00 0.00 592877.00 2579.33 1603605.38
[2024-11-15T11:13:17.519Z] ===================================================================================================================
00:09:42.651 [2024-11-15T11:13:17.519Z] Total : 1928.24 120.51 0.00 0.00 835875.60 2579.33 1704672.95
00:09:42.651
00:09:42.651 real 0m9.501s
00:09:42.651 user 0m17.579s
00:09:42.651 sys 0m0.469s
00:09:42.651 11:13:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:42.651 11:13:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:09:42.651 ************************************
00:09:42.651 END TEST bdev_verify_big_io
00:09:42.651 ************************************
00:09:42.651 11:13:19 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:42.651 11:13:19 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:09:42.651 11:13:19 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:42.651 11:13:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:42.651 ************************************
00:09:42.651 START TEST bdev_write_zeroes
00:09:42.651 ************************************
00:09:42.651 11:13:19 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:42.651 [2024-11-15 11:13:19.749518] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization...
00:09:42.651 [2024-11-15 11:13:19.750204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63415 ]
00:09:42.651 [2024-11-15 11:13:19.929969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:42.908 [2024-11-15 11:13:20.079843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.472 Running I/O for 1 seconds...
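The write_zeroes pass keeps the same queue depth and 4 KiB I/O size but swaps in -w write_zeroes and a one-second run; no core mask is passed this time, and the EAL line above shows -c 0x1, so a single reactor drives one job per bdev. The same throughput arithmetic applies to the first progress line that follows:

    # 59968.00 IOPS * 4096 B = 245,628,928 B/s = exactly 234.25 MiB/s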
00:09:44.845 59968.00 IOPS, 234.25 MiB/s
00:09:44.845 Latency(us)
00:09:44.845 [2024-11-15T11:13:22.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:44.845 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:44.845 Nvme0n1 : 1.03 8537.21 33.35 0.00 0.00 14958.33 13054.56 27372.47
00:09:44.845 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:44.845 Nvme1n1p1 : 1.03 8527.29 33.31 0.00 0.00 14950.88 12949.28 27793.58
00:09:44.845 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:44.845 Nvme1n1p2 : 1.03 8517.77 33.27 0.00 0.00 14895.44 12528.17 25582.73
00:09:44.845 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:44.845 Nvme2n1 : 1.03 8509.42 33.24 0.00 0.00 14862.72 12686.09 23582.43
00:09:44.845 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:44.845 Nvme2n2 : 1.03 8501.05 33.21 0.00 0.00 14831.96 11580.66 22740.20
00:09:44.845 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:44.845 Nvme2n3 : 1.03 8492.66 33.17 0.00 0.00 14804.24 10264.67 24424.66
00:09:44.845 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:44.845 Nvme3n1 : 1.03 8422.46 32.90 0.00 0.00 14894.08 9896.20 25688.01
00:09:44.845 [2024-11-15T11:13:22.246Z] ===================================================================================================================
00:09:44.845 [2024-11-15T11:13:22.246Z] Total : 59507.87 232.45 0.00 0.00 14885.37 9896.20 27793.58
00:09:46.224
00:09:46.224 real 0m3.573s
00:09:46.224 user 0m3.086s
00:09:46.224 sys 0m0.369s
00:09:46.224 11:13:23 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:46.224 11:13:23 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:46.224 ************************************
00:09:46.224 END TEST bdev_write_zeroes
00:09:46.224 ************************************
00:09:46.224 11:13:23 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:46.224 11:13:23 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:09:46.224 11:13:23 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:46.224 11:13:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:46.224 ************************************
00:09:46.224 START TEST bdev_json_nonenclosed
00:09:46.224 ************************************
00:09:46.224 11:13:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:46.224 [2024-11-15 11:13:23.412939] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization...
00:09:46.224 [2024-11-15 11:13:23.413103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63473 ] 00:09:46.224 [2024-11-15 11:13:23.592019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.483 [2024-11-15 11:13:23.744057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.483 [2024-11-15 11:13:23.744188] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:46.483 [2024-11-15 11:13:23.744213] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:46.483 [2024-11-15 11:13:23.744228] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:46.741 00:09:46.741 real 0m0.724s 00:09:46.741 user 0m0.459s 00:09:46.741 sys 0m0.159s 00:09:46.741 11:13:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:46.741 11:13:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:46.741 ************************************ 00:09:46.741 END TEST bdev_json_nonenclosed 00:09:46.741 ************************************ 00:09:46.741 11:13:24 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:46.741 11:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:09:46.741 11:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:46.741 11:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:46.741 ************************************ 00:09:46.741 START TEST bdev_json_nonarray 00:09:46.741 ************************************ 00:09:46.742 11:13:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:47.001 [2024-11-15 11:13:24.202638] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:09:47.001 [2024-11-15 11:13:24.202770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63504 ] 00:09:47.001 [2024-11-15 11:13:24.387649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.260 [2024-11-15 11:13:24.552308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.260 [2024-11-15 11:13:24.552472] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:47.260 [2024-11-15 11:13:24.552502] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:47.260 [2024-11-15 11:13:24.552518] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:47.517 00:09:47.517 real 0m0.750s 00:09:47.517 user 0m0.472s 00:09:47.517 sys 0m0.172s 00:09:47.517 11:13:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.517 11:13:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:47.517 ************************************ 00:09:47.517 END TEST bdev_json_nonarray 00:09:47.517 ************************************ 00:09:47.776 11:13:24 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:09:47.776 11:13:24 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:09:47.776 11:13:24 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:47.776 11:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:47.776 11:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.776 11:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:47.776 ************************************ 00:09:47.776 START TEST bdev_gpt_uuid 00:09:47.776 ************************************ 00:09:47.776 11:13:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:09:47.776 11:13:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:09:47.776 11:13:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:09:47.776 11:13:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63530 00:09:47.776 11:13:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:47.776 11:13:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:47.776 11:13:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63530 00:09:47.776 11:13:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 63530 ']' 00:09:47.776 11:13:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.776 11:13:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:47.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.776 11:13:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.777 11:13:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:47.777 11:13:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:47.777 [2024-11-15 11:13:25.046968] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:09:47.777 [2024-11-15 11:13:25.047108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63530 ] 00:09:48.036 [2024-11-15 11:13:25.230229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.036 [2024-11-15 11:13:25.375856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:49.414 Some configs were skipped because the RPC state that can call them passed over. 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:09:49.414 { 00:09:49.414 "name": "Nvme1n1p1", 00:09:49.414 "aliases": [ 00:09:49.414 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:49.414 ], 00:09:49.414 "product_name": "GPT Disk", 00:09:49.414 "block_size": 4096, 00:09:49.414 "num_blocks": 655104, 00:09:49.414 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:49.414 "assigned_rate_limits": { 00:09:49.414 "rw_ios_per_sec": 0, 00:09:49.414 "rw_mbytes_per_sec": 0, 00:09:49.414 "r_mbytes_per_sec": 0, 00:09:49.414 "w_mbytes_per_sec": 0 00:09:49.414 }, 00:09:49.414 "claimed": false, 00:09:49.414 "zoned": false, 00:09:49.414 "supported_io_types": { 00:09:49.414 "read": true, 00:09:49.414 "write": true, 00:09:49.414 "unmap": true, 00:09:49.414 "flush": true, 00:09:49.414 "reset": true, 00:09:49.414 "nvme_admin": false, 00:09:49.414 "nvme_io": false, 00:09:49.414 "nvme_io_md": false, 00:09:49.414 "write_zeroes": true, 00:09:49.414 "zcopy": false, 00:09:49.414 "get_zone_info": false, 00:09:49.414 "zone_management": false, 00:09:49.414 "zone_append": false, 00:09:49.414 "compare": true, 00:09:49.414 "compare_and_write": false, 00:09:49.414 "abort": true, 00:09:49.414 "seek_hole": false, 00:09:49.414 "seek_data": false, 00:09:49.414 "copy": true, 00:09:49.414 "nvme_iov_md": false 00:09:49.414 }, 00:09:49.414 "driver_specific": { 
00:09:49.414 "gpt": { 00:09:49.414 "base_bdev": "Nvme1n1", 00:09:49.414 "offset_blocks": 256, 00:09:49.414 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:49.414 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:49.414 "partition_name": "SPDK_TEST_first" 00:09:49.414 } 00:09:49.414 } 00:09:49.414 } 00:09:49.414 ]' 00:09:49.414 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:09:49.674 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:09:49.674 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:09:49.674 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:49.674 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:49.674 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:49.674 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:49.674 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.674 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:49.674 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.674 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:09:49.674 { 00:09:49.674 "name": "Nvme1n1p2", 00:09:49.674 "aliases": [ 00:09:49.674 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:49.674 ], 00:09:49.674 "product_name": "GPT Disk", 00:09:49.674 "block_size": 4096, 00:09:49.674 "num_blocks": 655103, 00:09:49.674 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:49.674 "assigned_rate_limits": { 00:09:49.674 "rw_ios_per_sec": 0, 00:09:49.674 "rw_mbytes_per_sec": 0, 00:09:49.674 "r_mbytes_per_sec": 0, 00:09:49.674 "w_mbytes_per_sec": 0 00:09:49.674 }, 00:09:49.674 "claimed": false, 00:09:49.674 "zoned": false, 00:09:49.674 "supported_io_types": { 00:09:49.674 "read": true, 00:09:49.674 "write": true, 00:09:49.674 "unmap": true, 00:09:49.674 "flush": true, 00:09:49.674 "reset": true, 00:09:49.674 "nvme_admin": false, 00:09:49.674 "nvme_io": false, 00:09:49.674 "nvme_io_md": false, 00:09:49.674 "write_zeroes": true, 00:09:49.674 "zcopy": false, 00:09:49.674 "get_zone_info": false, 00:09:49.674 "zone_management": false, 00:09:49.674 "zone_append": false, 00:09:49.674 "compare": true, 00:09:49.674 "compare_and_write": false, 00:09:49.674 "abort": true, 00:09:49.674 "seek_hole": false, 00:09:49.674 "seek_data": false, 00:09:49.674 "copy": true, 00:09:49.674 "nvme_iov_md": false 00:09:49.674 }, 00:09:49.674 "driver_specific": { 00:09:49.674 "gpt": { 00:09:49.674 "base_bdev": "Nvme1n1", 00:09:49.674 "offset_blocks": 655360, 00:09:49.674 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:49.674 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:49.674 "partition_name": "SPDK_TEST_second" 00:09:49.674 } 00:09:49.674 } 00:09:49.674 } 00:09:49.674 ]' 00:09:49.674 11:13:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:09:49.674 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:09:49.674 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:09:49.674 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:49.933 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:49.933 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:49.933 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63530 00:09:49.933 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 63530 ']' 00:09:49.933 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 63530 00:09:49.933 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:09:49.933 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:49.933 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63530 00:09:49.933 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:49.934 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:49.934 killing process with pid 63530 00:09:49.934 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63530' 00:09:49.934 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 63530 00:09:49.934 11:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 63530 00:09:52.495 00:09:52.495 real 0m4.891s 00:09:52.495 user 0m4.857s 00:09:52.495 sys 0m0.772s 00:09:52.495 11:13:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:52.495 11:13:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:52.495 ************************************ 00:09:52.495 END TEST bdev_gpt_uuid 00:09:52.495 ************************************ 00:09:52.495 11:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:09:52.495 11:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:09:52.495 11:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:09:52.495 11:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:52.495 11:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:52.753 11:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:52.753 11:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:52.753 11:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:52.753 11:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:53.319 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:53.577 Waiting for block devices as requested 00:09:53.577 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:53.577 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:53.836 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:53.836 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.113 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:59.113 11:13:36 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:59.113 11:13:36 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:59.113 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:59.113 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:59.113 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:59.113 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:59.113 11:13:36 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:59.113 00:09:59.113 real 1m8.837s 00:09:59.113 user 1m24.704s 00:09:59.113 sys 0m13.445s 00:09:59.113 11:13:36 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:59.113 11:13:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:59.113 ************************************ 00:09:59.113 END TEST blockdev_nvme_gpt 00:09:59.113 ************************************ 00:09:59.372 11:13:36 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:59.372 11:13:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:59.372 11:13:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.372 11:13:36 -- common/autotest_common.sh@10 -- # set +x 00:09:59.372 ************************************ 00:09:59.372 START TEST nvme 00:09:59.372 ************************************ 00:09:59.372 11:13:36 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:59.372 * Looking for test storage... 00:09:59.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:59.372 11:13:36 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:59.372 11:13:36 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:09:59.372 11:13:36 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:59.629 11:13:36 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:59.629 11:13:36 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.629 11:13:36 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.629 11:13:36 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.629 11:13:36 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.629 11:13:36 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.629 11:13:36 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.629 11:13:36 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.629 11:13:36 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.629 11:13:36 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.629 11:13:36 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.629 11:13:36 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.629 11:13:36 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:59.629 11:13:36 nvme -- scripts/common.sh@345 -- # : 1 00:09:59.629 11:13:36 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.629 11:13:36 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.629 11:13:36 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:59.629 11:13:36 nvme -- scripts/common.sh@353 -- # local d=1 00:09:59.629 11:13:36 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.629 11:13:36 nvme -- scripts/common.sh@355 -- # echo 1 00:09:59.630 11:13:36 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.630 11:13:36 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:59.630 11:13:36 nvme -- scripts/common.sh@353 -- # local d=2 00:09:59.630 11:13:36 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.630 11:13:36 nvme -- scripts/common.sh@355 -- # echo 2 00:09:59.630 11:13:36 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.630 11:13:36 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.630 11:13:36 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.630 11:13:36 nvme -- scripts/common.sh@368 -- # return 0 00:09:59.630 11:13:36 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.630 11:13:36 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:59.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.630 --rc genhtml_branch_coverage=1 00:09:59.630 --rc genhtml_function_coverage=1 00:09:59.630 --rc genhtml_legend=1 00:09:59.630 --rc geninfo_all_blocks=1 00:09:59.630 --rc geninfo_unexecuted_blocks=1 00:09:59.630 00:09:59.630 ' 00:09:59.630 11:13:36 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:59.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.630 --rc genhtml_branch_coverage=1 00:09:59.630 --rc genhtml_function_coverage=1 00:09:59.630 --rc genhtml_legend=1 00:09:59.630 --rc geninfo_all_blocks=1 00:09:59.630 --rc geninfo_unexecuted_blocks=1 00:09:59.630 00:09:59.630 ' 00:09:59.630 11:13:36 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:59.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.630 --rc genhtml_branch_coverage=1 00:09:59.630 --rc genhtml_function_coverage=1 00:09:59.630 --rc genhtml_legend=1 00:09:59.630 --rc geninfo_all_blocks=1 00:09:59.630 --rc geninfo_unexecuted_blocks=1 00:09:59.630 00:09:59.630 ' 00:09:59.630 11:13:36 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:59.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.630 --rc genhtml_branch_coverage=1 00:09:59.630 --rc genhtml_function_coverage=1 00:09:59.630 --rc genhtml_legend=1 00:09:59.630 --rc geninfo_all_blocks=1 00:09:59.630 --rc geninfo_unexecuted_blocks=1 00:09:59.630 00:09:59.630 ' 00:09:59.630 11:13:36 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:00.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:01.129 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.129 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.129 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.129 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.129 11:13:38 nvme -- nvme/nvme.sh@79 -- # uname 00:10:01.129 11:13:38 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:01.129 11:13:38 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:01.129 11:13:38 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:01.129 11:13:38 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:01.129 11:13:38 nvme -- 
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:10:01.129 11:13:38 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:10:01.129 11:13:38 nvme -- common/autotest_common.sh@1073 -- # stubpid=64204 00:10:01.129 11:13:38 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:01.129 Waiting for stub to ready for secondary processes... 00:10:01.129 11:13:38 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:10:01.129 11:13:38 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:01.129 11:13:38 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64204 ]] 00:10:01.129 11:13:38 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:10:01.129 [2024-11-15 11:13:38.485808] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:10:01.129 [2024-11-15 11:13:38.485962] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:02.062 11:13:39 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:02.062 11:13:39 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64204 ]] 00:10:02.062 11:13:39 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:10:02.997 [2024-11-15 11:13:40.162739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:02.997 [2024-11-15 11:13:40.281636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.997 [2024-11-15 11:13:40.281734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.997 [2024-11-15 11:13:40.281778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.997 [2024-11-15 11:13:40.305311] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:02.997 [2024-11-15 11:13:40.305374] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:02.997 [2024-11-15 11:13:40.315331] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:02.997 [2024-11-15 11:13:40.315477] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:02.997 [2024-11-15 11:13:40.318471] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:02.997 [2024-11-15 11:13:40.318690] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:02.997 [2024-11-15 11:13:40.318761] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:02.997 [2024-11-15 11:13:40.321464] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:02.997 [2024-11-15 11:13:40.321655] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:02.997 [2024-11-15 11:13:40.321724] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:02.997 [2024-11-15 11:13:40.324167] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:02.997 [2024-11-15 11:13:40.324329] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:02.997 [2024-11-15 11:13:40.324390] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:02.997 [2024-11-15 11:13:40.324438] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:02.997 [2024-11-15 11:13:40.324482] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:03.256 11:13:40 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:03.256 done. 00:10:03.256 11:13:40 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:10:03.256 11:13:40 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:03.256 11:13:40 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:10:03.256 11:13:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.256 11:13:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:03.256 ************************************ 00:10:03.256 START TEST nvme_reset 00:10:03.256 ************************************ 00:10:03.256 11:13:40 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:03.514 Initializing NVMe Controllers 00:10:03.514 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:03.514 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:03.514 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:03.514 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:03.514 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:03.514 ************************************ 00:10:03.514 END TEST nvme_reset 00:10:03.514 ************************************ 00:10:03.514 00:10:03.514 real 0m0.314s 00:10:03.514 user 0m0.096s 00:10:03.514 sys 0m0.167s 00:10:03.514 11:13:40 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.514 11:13:40 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:03.514 11:13:40 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:03.514 11:13:40 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:03.514 11:13:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.514 11:13:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:03.514 ************************************ 00:10:03.514 START TEST nvme_identify 00:10:03.514 ************************************ 00:10:03.514 11:13:40 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:10:03.514 11:13:40 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:03.514 11:13:40 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:03.514 11:13:40 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:03.514 11:13:40 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:03.514 11:13:40 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:03.514 11:13:40 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:10:03.514 11:13:40 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:03.514 11:13:40 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:03.514 11:13:40 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:03.772 11:13:40 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:03.772 11:13:40 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:03.772 11:13:40 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:04.034 ===================================================== 00:10:04.034 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:04.034 ===================================================== 00:10:04.034 Controller Capabilities/Features 00:10:04.034 ================================ 00:10:04.034 Vendor ID: 1b36 00:10:04.034 Subsystem Vendor ID: 1af4 00:10:04.034 Serial Number: 12340 00:10:04.034 Model Number: QEMU NVMe Ctrl 00:10:04.034 Firmware Version: 8.0.0 00:10:04.034 Recommended Arb Burst: 6 00:10:04.034 IEEE OUI Identifier: 00 54 52 00:10:04.034 Multi-path I/O 00:10:04.034 May have multiple subsystem ports: No 00:10:04.034 May have multiple controllers: No 00:10:04.034 Associated with SR-IOV VF: No 00:10:04.034 Max Data Transfer Size: 524288 00:10:04.034 Max Number of Namespaces: 256 00:10:04.034 Max Number of I/O Queues: 64 00:10:04.034 NVMe Specification Version (VS): 1.4 00:10:04.034 NVMe Specification Version (Identify): 1.4 00:10:04.034 Maximum Queue Entries: 2048 00:10:04.034 Contiguous Queues Required: Yes 00:10:04.034 Arbitration Mechanisms Supported 00:10:04.034 Weighted Round Robin: Not Supported 00:10:04.034 Vendor Specific: Not Supported 00:10:04.034 Reset Timeout: 7500 ms 00:10:04.034 Doorbell Stride: 4 bytes 00:10:04.034 NVM Subsystem Reset: Not Supported 00:10:04.034 Command Sets Supported 00:10:04.034 NVM Command Set: Supported 00:10:04.034 Boot Partition: Not Supported 00:10:04.034 Memory Page Size Minimum: 4096 bytes 00:10:04.034 Memory Page Size Maximum: 65536 bytes 00:10:04.034 Persistent Memory Region: Not Supported 00:10:04.034 Optional Asynchronous Events Supported 00:10:04.034 Namespace Attribute Notices: Supported 00:10:04.034 Firmware Activation Notices: Not Supported 00:10:04.034 ANA Change Notices: Not Supported 00:10:04.034 PLE Aggregate Log Change Notices: Not Supported 00:10:04.034 LBA Status Info Alert Notices: Not Supported 00:10:04.034 EGE Aggregate Log Change Notices: Not Supported 00:10:04.034 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.034 Zone Descriptor Change Notices: Not Supported 00:10:04.034 Discovery Log Change Notices: Not Supported 00:10:04.034 Controller Attributes 00:10:04.034 128-bit Host Identifier: Not Supported 00:10:04.034 Non-Operational Permissive Mode: Not Supported 00:10:04.034 NVM Sets: Not Supported 00:10:04.034 Read Recovery Levels: Not Supported 00:10:04.034 Endurance Groups: Not Supported 00:10:04.034 Predictable Latency Mode: Not Supported 00:10:04.034 Traffic Based Keep ALive: Not Supported 00:10:04.034 Namespace Granularity: Not Supported 00:10:04.034 SQ Associations: Not Supported 00:10:04.034 UUID List: Not Supported 00:10:04.034 Multi-Domain Subsystem: Not Supported 00:10:04.034 Fixed Capacity Management: Not Supported 00:10:04.034 Variable Capacity Management: Not Supported 00:10:04.034 Delete Endurance Group: Not Supported 00:10:04.034 Delete NVM Set: Not Supported 00:10:04.034 Extended LBA Formats Supported: Supported 00:10:04.034 Flexible Data Placement Supported: Not Supported 00:10:04.034 00:10:04.034 Controller Memory Buffer Support 00:10:04.034 ================================ 00:10:04.034 Supported: No 00:10:04.034 00:10:04.034 Persistent Memory Region Support 00:10:04.034 ================================ 00:10:04.034 Supported: No 00:10:04.034 00:10:04.034 Admin 
Command Set Attributes 00:10:04.034 ============================ 00:10:04.034 Security Send/Receive: Not Supported 00:10:04.034 Format NVM: Supported 00:10:04.035 Firmware Activate/Download: Not Supported 00:10:04.035 Namespace Management: Supported 00:10:04.035 Device Self-Test: Not Supported 00:10:04.035 Directives: Supported 00:10:04.035 NVMe-MI: Not Supported 00:10:04.035 Virtualization Management: Not Supported 00:10:04.035 Doorbell Buffer Config: Supported 00:10:04.035 Get LBA Status Capability: Not Supported 00:10:04.035 Command & Feature Lockdown Capability: Not Supported 00:10:04.035 Abort Command Limit: 4 00:10:04.035 Async Event Request Limit: 4 00:10:04.035 Number of Firmware Slots: N/A 00:10:04.035 Firmware Slot 1 Read-Only: N/A 00:10:04.035 Firmware Activation Without Reset: N/A 00:10:04.035 Multiple Update Detection Support: N/A 00:10:04.035 Firmware Update Granularity: No Information Provided 00:10:04.035 Per-Namespace SMART Log: Yes 00:10:04.035 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.035 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:04.035 Command Effects Log Page: Supported 00:10:04.035 Get Log Page Extended Data: Supported 00:10:04.035 Telemetry Log Pages: Not Supported 00:10:04.035 Persistent Event Log Pages: Not Supported 00:10:04.035 Supported Log Pages Log Page: May Support 00:10:04.035 Commands Supported & Effects Log Page: Not Supported 00:10:04.035 Feature Identifiers & Effects Log Page:May Support 00:10:04.035 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.035 Data Area 4 for Telemetry Log: Not Supported 00:10:04.035 Error Log Page Entries Supported: 1 00:10:04.035 Keep Alive: Not Supported 00:10:04.035 00:10:04.035 NVM Command Set Attributes 00:10:04.035 ========================== 00:10:04.035 Submission Queue Entry Size 00:10:04.035 Max: 64 00:10:04.035 Min: 64 00:10:04.035 Completion Queue Entry Size 00:10:04.035 Max: 16 00:10:04.035 Min: 16 00:10:04.035 Number of Namespaces: 256 00:10:04.035 Compare Command: Supported 00:10:04.035 Write Uncorrectable Command: Not Supported 00:10:04.035 Dataset Management Command: Supported 00:10:04.035 Write Zeroes Command: Supported 00:10:04.035 Set Features Save Field: Supported 00:10:04.035 Reservations: Not Supported 00:10:04.035 Timestamp: Supported 00:10:04.035 Copy: Supported 00:10:04.035 Volatile Write Cache: Present 00:10:04.035 Atomic Write Unit (Normal): 1 00:10:04.035 Atomic Write Unit (PFail): 1 00:10:04.035 Atomic Compare & Write Unit: 1 00:10:04.035 Fused Compare & Write: Not Supported 00:10:04.035 Scatter-Gather List 00:10:04.035 SGL Command Set: Supported 00:10:04.035 SGL Keyed: Not Supported 00:10:04.035 SGL Bit Bucket Descriptor: Not Supported 00:10:04.035 SGL Metadata Pointer: Not Supported 00:10:04.035 Oversized SGL: Not Supported 00:10:04.035 SGL Metadata Address: Not Supported 00:10:04.035 SGL Offset: Not Supported 00:10:04.035 Transport SGL Data Block: Not Supported 00:10:04.035 Replay Protected Memory Block: Not Supported 00:10:04.035 00:10:04.035 Firmware Slot Information 00:10:04.035 ========================= 00:10:04.035 Active slot: 1 00:10:04.035 Slot 1 Firmware Revision: 1.0 00:10:04.035 00:10:04.035 00:10:04.035 Commands Supported and Effects 00:10:04.035 ============================== 00:10:04.035 Admin Commands 00:10:04.035 -------------- 00:10:04.035 Delete I/O Submission Queue (00h): Supported 00:10:04.035 Create I/O Submission Queue (01h): Supported 00:10:04.035 Get Log Page (02h): Supported 00:10:04.035 Delete I/O Completion Queue (04h): Supported 
00:10:04.035 Create I/O Completion Queue (05h): Supported 00:10:04.035 Identify (06h): Supported 00:10:04.035 Abort (08h): Supported 00:10:04.035 Set Features (09h): Supported 00:10:04.035 Get Features (0Ah): Supported 00:10:04.035 Asynchronous Event Request (0Ch): Supported 00:10:04.035 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:04.035 Directive Send (19h): Supported 00:10:04.035 Directive Receive (1Ah): Supported 00:10:04.035 Virtualization Management (1Ch): Supported 00:10:04.035 Doorbell Buffer Config (7Ch): Supported 00:10:04.035 Format NVM (80h): Supported LBA-Change 00:10:04.035 I/O Commands 00:10:04.035 ------------ 00:10:04.035 Flush (00h): Supported LBA-Change 00:10:04.035 Write (01h): Supported LBA-Change 00:10:04.035 Read (02h): Supported 00:10:04.035 Compare (05h): Supported 00:10:04.035 Write Zeroes (08h): Supported LBA-Change 00:10:04.035 Dataset Management (09h): Supported LBA-Change 00:10:04.035 Unknown (0Ch): Supported 00:10:04.035 Unknown (12h): Supported 00:10:04.035 Copy (19h): Supported LBA-Change 00:10:04.035 Unknown (1Dh): Supported LBA-Change 00:10:04.035 00:10:04.035 Error Log 00:10:04.035 ========= 00:10:04.035 00:10:04.035 Arbitration 00:10:04.035 =========== 00:10:04.035 Arbitration Burst: no limit 00:10:04.035 00:10:04.035 Power Management 00:10:04.035 ================ 00:10:04.035 Number of Power States: 1 00:10:04.035 Current Power State: Power State #0 00:10:04.035 Power State #0: 00:10:04.035 Max Power: 25.00 W 00:10:04.035 Non-Operational State: Operational 00:10:04.035 Entry Latency: 16 microseconds 00:10:04.035 Exit Latency: 4 microseconds 00:10:04.035 Relative Read Throughput: 0 00:10:04.035 Relative Read Latency: 0 00:10:04.035 Relative Write Throughput: 0 00:10:04.035 Relative Write Latency: 0 00:10:04.035 Idle Power: Not Reported 00:10:04.035 [2024-11-15 11:13:41.218959] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64239 terminated unexpected 00:10:04.035 [2024-11-15 11:13:41.220031] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64239 terminated unexpected 00:10:04.035 Active Power: Not Reported 00:10:04.035 Non-Operational Permissive Mode: Not Supported 00:10:04.035 00:10:04.035 Health Information 00:10:04.035 ================== 00:10:04.035 Critical Warnings: 00:10:04.035 Available Spare Space: OK 00:10:04.035 Temperature: OK 00:10:04.035 Device Reliability: OK 00:10:04.035 Read Only: No 00:10:04.035 Volatile Memory Backup: OK 00:10:04.035 Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.035 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:04.035 Available Spare: 0% 00:10:04.035 Available Spare Threshold: 0% 00:10:04.035 Life Percentage Used: 0% 00:10:04.035 Data Units Read: 730 00:10:04.035 Data Units Written: 658 00:10:04.035 Host Read Commands: 34821 00:10:04.035 Host Write Commands: 34607 00:10:04.035 Controller Busy Time: 0 minutes 00:10:04.035 Power Cycles: 0 00:10:04.035 Power On Hours: 0 hours 00:10:04.035 Unsafe Shutdowns: 0 00:10:04.035 Unrecoverable Media Errors: 0 00:10:04.035 Lifetime Error Log Entries: 0 00:10:04.035 Warning Temperature Time: 0 minutes 00:10:04.035 Critical Temperature Time: 0 minutes 00:10:04.035 00:10:04.035 Number of Queues 00:10:04.035 ================ 00:10:04.035 Number of I/O Submission Queues: 64 00:10:04.035 Number of I/O Completion Queues: 64 00:10:04.035 00:10:04.035 ZNS Specific Controller Data 00:10:04.035 ============================ 00:10:04.035 Zone Append Size Limit: 0 00:10:04.035 
00:10:04.035 00:10:04.035 Active Namespaces 00:10:04.035 ================= 00:10:04.035 Namespace ID:1 00:10:04.035 Error Recovery Timeout: Unlimited 00:10:04.035 Command Set Identifier: NVM (00h) 00:10:04.035 Deallocate: Supported 00:10:04.035 Deallocated/Unwritten Error: Supported 00:10:04.035 Deallocated Read Value: All 0x00 00:10:04.035 Deallocate in Write Zeroes: Not Supported 00:10:04.035 Deallocated Guard Field: 0xFFFF 00:10:04.035 Flush: Supported 00:10:04.035 Reservation: Not Supported 00:10:04.035 Metadata Transferred as: Separate Metadata Buffer 00:10:04.035 Namespace Sharing Capabilities: Private 00:10:04.035 Size (in LBAs): 1548666 (5GiB) 00:10:04.035 Capacity (in LBAs): 1548666 (5GiB) 00:10:04.035 Utilization (in LBAs): 1548666 (5GiB) 00:10:04.035 Thin Provisioning: Not Supported 00:10:04.035 Per-NS Atomic Units: No 00:10:04.035 Maximum Single Source Range Length: 128 00:10:04.035 Maximum Copy Length: 128 00:10:04.035 Maximum Source Range Count: 128 00:10:04.035 NGUID/EUI64 Never Reused: No 00:10:04.035 Namespace Write Protected: No 00:10:04.035 Number of LBA Formats: 8 00:10:04.035 Current LBA Format: LBA Format #07 00:10:04.035 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.035 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.035 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.035 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.035 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.035 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.035 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.035 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.035 00:10:04.035 NVM Specific Namespace Data 00:10:04.035 =========================== 00:10:04.035 Logical Block Storage Tag Mask: 0 00:10:04.035 Protection Information Capabilities: 00:10:04.035 16b Guard Protection Information Storage Tag Support: No 00:10:04.036 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:04.036 Storage Tag Check Read Support: No 00:10:04.036 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.036 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.036 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.036 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.036 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.036 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.036 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.036 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.036 ===================================================== 00:10:04.036 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:04.036 ===================================================== 00:10:04.036 Controller Capabilities/Features 00:10:04.036 ================================ 00:10:04.036 Vendor ID: 1b36 00:10:04.036 Subsystem Vendor ID: 1af4 00:10:04.036 Serial Number: 12341 00:10:04.036 Model Number: QEMU NVMe Ctrl 00:10:04.036 Firmware Version: 8.0.0 00:10:04.036 Recommended Arb Burst: 6 00:10:04.036 IEEE OUI Identifier: 00 54 52 00:10:04.036 Multi-path I/O 00:10:04.036 May have multiple subsystem ports: No 00:10:04.036 May have multiple controllers: No 
00:10:04.036 Associated with SR-IOV VF: No 00:10:04.036 Max Data Transfer Size: 524288 00:10:04.036 Max Number of Namespaces: 256 00:10:04.036 Max Number of I/O Queues: 64 00:10:04.036 NVMe Specification Version (VS): 1.4 00:10:04.036 NVMe Specification Version (Identify): 1.4 00:10:04.036 Maximum Queue Entries: 2048 00:10:04.036 Contiguous Queues Required: Yes 00:10:04.036 Arbitration Mechanisms Supported 00:10:04.036 Weighted Round Robin: Not Supported 00:10:04.036 Vendor Specific: Not Supported 00:10:04.036 Reset Timeout: 7500 ms 00:10:04.036 Doorbell Stride: 4 bytes 00:10:04.036 NVM Subsystem Reset: Not Supported 00:10:04.036 Command Sets Supported 00:10:04.036 NVM Command Set: Supported 00:10:04.036 Boot Partition: Not Supported 00:10:04.036 Memory Page Size Minimum: 4096 bytes 00:10:04.036 Memory Page Size Maximum: 65536 bytes 00:10:04.036 Persistent Memory Region: Not Supported 00:10:04.036 Optional Asynchronous Events Supported 00:10:04.036 Namespace Attribute Notices: Supported 00:10:04.036 Firmware Activation Notices: Not Supported 00:10:04.036 ANA Change Notices: Not Supported 00:10:04.036 PLE Aggregate Log Change Notices: Not Supported 00:10:04.036 LBA Status Info Alert Notices: Not Supported 00:10:04.036 EGE Aggregate Log Change Notices: Not Supported 00:10:04.036 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.036 Zone Descriptor Change Notices: Not Supported 00:10:04.036 Discovery Log Change Notices: Not Supported 00:10:04.036 Controller Attributes 00:10:04.036 128-bit Host Identifier: Not Supported 00:10:04.036 Non-Operational Permissive Mode: Not Supported 00:10:04.036 NVM Sets: Not Supported 00:10:04.036 Read Recovery Levels: Not Supported 00:10:04.036 Endurance Groups: Not Supported 00:10:04.036 Predictable Latency Mode: Not Supported 00:10:04.036 Traffic Based Keep ALive: Not Supported 00:10:04.036 Namespace Granularity: Not Supported 00:10:04.036 SQ Associations: Not Supported 00:10:04.036 UUID List: Not Supported 00:10:04.036 Multi-Domain Subsystem: Not Supported 00:10:04.036 Fixed Capacity Management: Not Supported 00:10:04.036 Variable Capacity Management: Not Supported 00:10:04.036 Delete Endurance Group: Not Supported 00:10:04.036 Delete NVM Set: Not Supported 00:10:04.036 Extended LBA Formats Supported: Supported 00:10:04.036 Flexible Data Placement Supported: Not Supported 00:10:04.036 00:10:04.036 Controller Memory Buffer Support 00:10:04.036 ================================ 00:10:04.036 Supported: No 00:10:04.036 00:10:04.036 Persistent Memory Region Support 00:10:04.036 ================================ 00:10:04.036 Supported: No 00:10:04.036 00:10:04.036 Admin Command Set Attributes 00:10:04.036 ============================ 00:10:04.036 Security Send/Receive: Not Supported 00:10:04.036 Format NVM: Supported 00:10:04.036 Firmware Activate/Download: Not Supported 00:10:04.036 Namespace Management: Supported 00:10:04.036 Device Self-Test: Not Supported 00:10:04.036 Directives: Supported 00:10:04.036 NVMe-MI: Not Supported 00:10:04.036 Virtualization Management: Not Supported 00:10:04.036 Doorbell Buffer Config: Supported 00:10:04.036 Get LBA Status Capability: Not Supported 00:10:04.036 Command & Feature Lockdown Capability: Not Supported 00:10:04.036 Abort Command Limit: 4 00:10:04.036 Async Event Request Limit: 4 00:10:04.036 Number of Firmware Slots: N/A 00:10:04.036 Firmware Slot 1 Read-Only: N/A 00:10:04.036 Firmware Activation Without Reset: N/A 00:10:04.036 Multiple Update Detection Support: N/A 00:10:04.036 Firmware Update Granularity: No 
Information Provided 00:10:04.036 Per-Namespace SMART Log: Yes 00:10:04.036 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.036 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:04.036 Command Effects Log Page: Supported 00:10:04.036 Get Log Page Extended Data: Supported 00:10:04.036 Telemetry Log Pages: Not Supported 00:10:04.036 Persistent Event Log Pages: Not Supported 00:10:04.036 Supported Log Pages Log Page: May Support 00:10:04.036 Commands Supported & Effects Log Page: Not Supported 00:10:04.036 Feature Identifiers & Effects Log Page:May Support 00:10:04.036 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.036 Data Area 4 for Telemetry Log: Not Supported 00:10:04.036 Error Log Page Entries Supported: 1 00:10:04.036 Keep Alive: Not Supported 00:10:04.036 00:10:04.036 NVM Command Set Attributes 00:10:04.036 ========================== 00:10:04.036 Submission Queue Entry Size 00:10:04.036 Max: 64 00:10:04.036 Min: 64 00:10:04.036 Completion Queue Entry Size 00:10:04.036 Max: 16 00:10:04.036 Min: 16 00:10:04.036 Number of Namespaces: 256 00:10:04.036 Compare Command: Supported 00:10:04.036 Write Uncorrectable Command: Not Supported 00:10:04.036 Dataset Management Command: Supported 00:10:04.036 Write Zeroes Command: Supported 00:10:04.036 Set Features Save Field: Supported 00:10:04.036 Reservations: Not Supported 00:10:04.036 Timestamp: Supported 00:10:04.036 Copy: Supported 00:10:04.036 Volatile Write Cache: Present 00:10:04.036 Atomic Write Unit (Normal): 1 00:10:04.036 Atomic Write Unit (PFail): 1 00:10:04.036 Atomic Compare & Write Unit: 1 00:10:04.036 Fused Compare & Write: Not Supported 00:10:04.036 Scatter-Gather List 00:10:04.036 SGL Command Set: Supported 00:10:04.036 SGL Keyed: Not Supported 00:10:04.036 SGL Bit Bucket Descriptor: Not Supported 00:10:04.036 SGL Metadata Pointer: Not Supported 00:10:04.036 Oversized SGL: Not Supported 00:10:04.036 SGL Metadata Address: Not Supported 00:10:04.036 SGL Offset: Not Supported 00:10:04.036 Transport SGL Data Block: Not Supported 00:10:04.036 Replay Protected Memory Block: Not Supported 00:10:04.036 00:10:04.036 Firmware Slot Information 00:10:04.036 ========================= 00:10:04.036 Active slot: 1 00:10:04.036 Slot 1 Firmware Revision: 1.0 00:10:04.036 00:10:04.036 00:10:04.036 Commands Supported and Effects 00:10:04.036 ============================== 00:10:04.036 Admin Commands 00:10:04.036 -------------- 00:10:04.036 Delete I/O Submission Queue (00h): Supported 00:10:04.036 Create I/O Submission Queue (01h): Supported 00:10:04.036 Get Log Page (02h): Supported 00:10:04.036 Delete I/O Completion Queue (04h): Supported 00:10:04.036 Create I/O Completion Queue (05h): Supported 00:10:04.036 Identify (06h): Supported 00:10:04.036 Abort (08h): Supported 00:10:04.036 Set Features (09h): Supported 00:10:04.036 Get Features (0Ah): Supported 00:10:04.036 Asynchronous Event Request (0Ch): Supported 00:10:04.036 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:04.036 Directive Send (19h): Supported 00:10:04.036 Directive Receive (1Ah): Supported 00:10:04.036 Virtualization Management (1Ch): Supported 00:10:04.036 Doorbell Buffer Config (7Ch): Supported 00:10:04.036 Format NVM (80h): Supported LBA-Change 00:10:04.036 I/O Commands 00:10:04.036 ------------ 00:10:04.036 Flush (00h): Supported LBA-Change 00:10:04.036 Write (01h): Supported LBA-Change 00:10:04.036 Read (02h): Supported 00:10:04.036 Compare (05h): Supported 00:10:04.036 Write Zeroes (08h): Supported LBA-Change 00:10:04.036 Dataset Management 
(09h): Supported LBA-Change 00:10:04.036 Unknown (0Ch): Supported 00:10:04.036 Unknown (12h): Supported 00:10:04.036 Copy (19h): Supported LBA-Change 00:10:04.036 Unknown (1Dh): Supported LBA-Change 00:10:04.036 00:10:04.036 Error Log 00:10:04.036 ========= 00:10:04.036 00:10:04.036 Arbitration 00:10:04.037 =========== 00:10:04.037 Arbitration Burst: no limit 00:10:04.037 00:10:04.037 Power Management 00:10:04.037 ================ 00:10:04.037 Number of Power States: 1 00:10:04.037 Current Power State: Power State #0 00:10:04.037 Power State #0: 00:10:04.037 Max Power: 25.00 W 00:10:04.037 Non-Operational State: Operational 00:10:04.037 Entry Latency: 16 microseconds 00:10:04.037 Exit Latency: 4 microseconds 00:10:04.037 Relative Read Throughput: 0 00:10:04.037 Relative Read Latency: 0 00:10:04.037 Relative Write Throughput: 0 00:10:04.037 Relative Write Latency: 0 00:10:04.037 Idle Power: Not Reported 00:10:04.037 Active Power: Not Reported 00:10:04.037 Non-Operational Permissive Mode: Not Supported 00:10:04.037 00:10:04.037 Health Information 00:10:04.037 ================== 00:10:04.037 Critical Warnings: 00:10:04.037 Available Spare Space: OK 00:10:04.037 Temperature: OK 00:10:04.037 [2024-11-15 11:13:41.220618] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64239 terminated unexpected 00:10:04.037 Device Reliability: OK 00:10:04.037 Read Only: No 00:10:04.037 Volatile Memory Backup: OK 00:10:04.037 Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.037 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:04.037 Available Spare: 0% 00:10:04.037 Available Spare Threshold: 0% 00:10:04.037 Life Percentage Used: 0% 00:10:04.037 Data Units Read: 1111 00:10:04.037 Data Units Written: 976 00:10:04.037 Host Read Commands: 51128 00:10:04.037 Host Write Commands: 49874 00:10:04.037 Controller Busy Time: 0 minutes 00:10:04.037 Power Cycles: 0 00:10:04.037 Power On Hours: 0 hours 00:10:04.037 Unsafe Shutdowns: 0 00:10:04.037 Unrecoverable Media Errors: 0 00:10:04.037 Lifetime Error Log Entries: 0 00:10:04.037 Warning Temperature Time: 0 minutes 00:10:04.037 Critical Temperature Time: 0 minutes 00:10:04.037 00:10:04.037 Number of Queues 00:10:04.037 ================ 00:10:04.037 Number of I/O Submission Queues: 64 00:10:04.037 Number of I/O Completion Queues: 64 00:10:04.037 00:10:04.037 ZNS Specific Controller Data 00:10:04.037 ============================ 00:10:04.037 Zone Append Size Limit: 0 00:10:04.037 00:10:04.037 00:10:04.037 Active Namespaces 00:10:04.037 ================= 00:10:04.037 Namespace ID:1 00:10:04.037 Error Recovery Timeout: Unlimited 00:10:04.037 Command Set Identifier: NVM (00h) 00:10:04.037 Deallocate: Supported 00:10:04.037 Deallocated/Unwritten Error: Supported 00:10:04.037 Deallocated Read Value: All 0x00 00:10:04.037 Deallocate in Write Zeroes: Not Supported 00:10:04.037 Deallocated Guard Field: 0xFFFF 00:10:04.037 Flush: Supported 00:10:04.037 Reservation: Not Supported 00:10:04.037 Namespace Sharing Capabilities: Private 00:10:04.037 Size (in LBAs): 1310720 (5GiB) 00:10:04.037 Capacity (in LBAs): 1310720 (5GiB) 00:10:04.037 Utilization (in LBAs): 1310720 (5GiB) 00:10:04.037 Thin Provisioning: Not Supported 00:10:04.037 Per-NS Atomic Units: No 00:10:04.037 Maximum Single Source Range Length: 128 00:10:04.037 Maximum Copy Length: 128 00:10:04.037 Maximum Source Range Count: 128 00:10:04.037 NGUID/EUI64 Never Reused: No 00:10:04.037 Namespace Write Protected: No 00:10:04.037 Number of LBA Formats: 8 00:10:04.037 Current LBA Format: 
LBA Format #04 00:10:04.037 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.037 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.037 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.037 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.037 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.037 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.037 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.037 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.037 00:10:04.037 NVM Specific Namespace Data 00:10:04.037 =========================== 00:10:04.037 Logical Block Storage Tag Mask: 0 00:10:04.037 Protection Information Capabilities: 00:10:04.037 16b Guard Protection Information Storage Tag Support: No 00:10:04.037 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:04.037 Storage Tag Check Read Support: No 00:10:04.037 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.037 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.037 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.037 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.037 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.037 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.037 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.037 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.037 ===================================================== 00:10:04.037 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:04.037 ===================================================== 00:10:04.037 Controller Capabilities/Features 00:10:04.037 ================================ 00:10:04.037 Vendor ID: 1b36 00:10:04.037 Subsystem Vendor ID: 1af4 00:10:04.037 Serial Number: 12343 00:10:04.037 Model Number: QEMU NVMe Ctrl 00:10:04.037 Firmware Version: 8.0.0 00:10:04.037 Recommended Arb Burst: 6 00:10:04.037 IEEE OUI Identifier: 00 54 52 00:10:04.037 Multi-path I/O 00:10:04.037 May have multiple subsystem ports: No 00:10:04.037 May have multiple controllers: Yes 00:10:04.037 Associated with SR-IOV VF: No 00:10:04.037 Max Data Transfer Size: 524288 00:10:04.037 Max Number of Namespaces: 256 00:10:04.037 Max Number of I/O Queues: 64 00:10:04.037 NVMe Specification Version (VS): 1.4 00:10:04.037 NVMe Specification Version (Identify): 1.4 00:10:04.037 Maximum Queue Entries: 2048 00:10:04.037 Contiguous Queues Required: Yes 00:10:04.037 Arbitration Mechanisms Supported 00:10:04.037 Weighted Round Robin: Not Supported 00:10:04.037 Vendor Specific: Not Supported 00:10:04.037 Reset Timeout: 7500 ms 00:10:04.037 Doorbell Stride: 4 bytes 00:10:04.037 NVM Subsystem Reset: Not Supported 00:10:04.037 Command Sets Supported 00:10:04.037 NVM Command Set: Supported 00:10:04.037 Boot Partition: Not Supported 00:10:04.037 Memory Page Size Minimum: 4096 bytes 00:10:04.037 Memory Page Size Maximum: 65536 bytes 00:10:04.037 Persistent Memory Region: Not Supported 00:10:04.037 Optional Asynchronous Events Supported 00:10:04.037 Namespace Attribute Notices: Supported 00:10:04.037 Firmware Activation Notices: Not Supported 00:10:04.037 ANA Change Notices: Not Supported 00:10:04.037 PLE Aggregate Log 
Change Notices: Not Supported 00:10:04.037 LBA Status Info Alert Notices: Not Supported 00:10:04.037 EGE Aggregate Log Change Notices: Not Supported 00:10:04.037 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.037 Zone Descriptor Change Notices: Not Supported 00:10:04.037 Discovery Log Change Notices: Not Supported 00:10:04.037 Controller Attributes 00:10:04.037 128-bit Host Identifier: Not Supported 00:10:04.037 Non-Operational Permissive Mode: Not Supported 00:10:04.037 NVM Sets: Not Supported 00:10:04.037 Read Recovery Levels: Not Supported 00:10:04.037 Endurance Groups: Supported 00:10:04.037 Predictable Latency Mode: Not Supported 00:10:04.037 Traffic Based Keep ALive: Not Supported 00:10:04.037 Namespace Granularity: Not Supported 00:10:04.037 SQ Associations: Not Supported 00:10:04.037 UUID List: Not Supported 00:10:04.037 Multi-Domain Subsystem: Not Supported 00:10:04.037 Fixed Capacity Management: Not Supported 00:10:04.037 Variable Capacity Management: Not Supported 00:10:04.037 Delete Endurance Group: Not Supported 00:10:04.037 Delete NVM Set: Not Supported 00:10:04.037 Extended LBA Formats Supported: Supported 00:10:04.037 Flexible Data Placement Supported: Supported 00:10:04.037 00:10:04.037 Controller Memory Buffer Support 00:10:04.037 ================================ 00:10:04.037 Supported: No 00:10:04.037 00:10:04.037 Persistent Memory Region Support 00:10:04.037 ================================ 00:10:04.037 Supported: No 00:10:04.037 00:10:04.037 Admin Command Set Attributes 00:10:04.037 ============================ 00:10:04.037 Security Send/Receive: Not Supported 00:10:04.037 Format NVM: Supported 00:10:04.037 Firmware Activate/Download: Not Supported 00:10:04.037 Namespace Management: Supported 00:10:04.037 Device Self-Test: Not Supported 00:10:04.037 Directives: Supported 00:10:04.037 NVMe-MI: Not Supported 00:10:04.037 Virtualization Management: Not Supported 00:10:04.037 Doorbell Buffer Config: Supported 00:10:04.037 Get LBA Status Capability: Not Supported 00:10:04.037 Command & Feature Lockdown Capability: Not Supported 00:10:04.037 Abort Command Limit: 4 00:10:04.037 Async Event Request Limit: 4 00:10:04.037 Number of Firmware Slots: N/A 00:10:04.037 Firmware Slot 1 Read-Only: N/A 00:10:04.037 Firmware Activation Without Reset: N/A 00:10:04.037 Multiple Update Detection Support: N/A 00:10:04.037 Firmware Update Granularity: No Information Provided 00:10:04.037 Per-Namespace SMART Log: Yes 00:10:04.038 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.038 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:04.038 Command Effects Log Page: Supported 00:10:04.038 Get Log Page Extended Data: Supported 00:10:04.038 Telemetry Log Pages: Not Supported 00:10:04.038 Persistent Event Log Pages: Not Supported 00:10:04.038 Supported Log Pages Log Page: May Support 00:10:04.038 Commands Supported & Effects Log Page: Not Supported 00:10:04.038 Feature Identifiers & Effects Log Page:May Support 00:10:04.038 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.038 Data Area 4 for Telemetry Log: Not Supported 00:10:04.038 Error Log Page Entries Supported: 1 00:10:04.038 Keep Alive: Not Supported 00:10:04.038 00:10:04.038 NVM Command Set Attributes 00:10:04.038 ========================== 00:10:04.038 Submission Queue Entry Size 00:10:04.038 Max: 64 00:10:04.038 Min: 64 00:10:04.038 Completion Queue Entry Size 00:10:04.038 Max: 16 00:10:04.038 Min: 16 00:10:04.038 Number of Namespaces: 256 00:10:04.038 Compare Command: Supported 00:10:04.038 Write 
Uncorrectable Command: Not Supported 00:10:04.038 Dataset Management Command: Supported 00:10:04.038 Write Zeroes Command: Supported 00:10:04.038 Set Features Save Field: Supported 00:10:04.038 Reservations: Not Supported 00:10:04.038 Timestamp: Supported 00:10:04.038 Copy: Supported 00:10:04.038 Volatile Write Cache: Present 00:10:04.038 Atomic Write Unit (Normal): 1 00:10:04.038 Atomic Write Unit (PFail): 1 00:10:04.038 Atomic Compare & Write Unit: 1 00:10:04.038 Fused Compare & Write: Not Supported 00:10:04.038 Scatter-Gather List 00:10:04.038 SGL Command Set: Supported 00:10:04.038 SGL Keyed: Not Supported 00:10:04.038 SGL Bit Bucket Descriptor: Not Supported 00:10:04.038 SGL Metadata Pointer: Not Supported 00:10:04.038 Oversized SGL: Not Supported 00:10:04.038 SGL Metadata Address: Not Supported 00:10:04.038 SGL Offset: Not Supported 00:10:04.038 Transport SGL Data Block: Not Supported 00:10:04.038 Replay Protected Memory Block: Not Supported 00:10:04.038 00:10:04.038 Firmware Slot Information 00:10:04.038 ========================= 00:10:04.038 Active slot: 1 00:10:04.038 Slot 1 Firmware Revision: 1.0 00:10:04.038 00:10:04.038 00:10:04.038 Commands Supported and Effects 00:10:04.038 ============================== 00:10:04.038 Admin Commands 00:10:04.038 -------------- 00:10:04.038 Delete I/O Submission Queue (00h): Supported 00:10:04.038 Create I/O Submission Queue (01h): Supported 00:10:04.038 Get Log Page (02h): Supported 00:10:04.038 Delete I/O Completion Queue (04h): Supported 00:10:04.038 Create I/O Completion Queue (05h): Supported 00:10:04.038 Identify (06h): Supported 00:10:04.038 Abort (08h): Supported 00:10:04.038 Set Features (09h): Supported 00:10:04.038 Get Features (0Ah): Supported 00:10:04.038 Asynchronous Event Request (0Ch): Supported 00:10:04.038 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:04.038 Directive Send (19h): Supported 00:10:04.038 Directive Receive (1Ah): Supported 00:10:04.038 Virtualization Management (1Ch): Supported 00:10:04.038 Doorbell Buffer Config (7Ch): Supported 00:10:04.038 Format NVM (80h): Supported LBA-Change 00:10:04.038 I/O Commands 00:10:04.038 ------------ 00:10:04.038 Flush (00h): Supported LBA-Change 00:10:04.038 Write (01h): Supported LBA-Change 00:10:04.038 Read (02h): Supported 00:10:04.038 Compare (05h): Supported 00:10:04.038 Write Zeroes (08h): Supported LBA-Change 00:10:04.038 Dataset Management (09h): Supported LBA-Change 00:10:04.038 Unknown (0Ch): Supported 00:10:04.038 Unknown (12h): Supported 00:10:04.038 Copy (19h): Supported LBA-Change 00:10:04.038 Unknown (1Dh): Supported LBA-Change 00:10:04.038 00:10:04.038 Error Log 00:10:04.038 ========= 00:10:04.038 00:10:04.038 Arbitration 00:10:04.038 =========== 00:10:04.038 Arbitration Burst: no limit 00:10:04.038 00:10:04.038 Power Management 00:10:04.038 ================ 00:10:04.038 Number of Power States: 1 00:10:04.038 Current Power State: Power State #0 00:10:04.038 Power State #0: 00:10:04.038 Max Power: 25.00 W 00:10:04.038 Non-Operational State: Operational 00:10:04.038 Entry Latency: 16 microseconds 00:10:04.038 Exit Latency: 4 microseconds 00:10:04.038 Relative Read Throughput: 0 00:10:04.038 Relative Read Latency: 0 00:10:04.038 Relative Write Throughput: 0 00:10:04.038 Relative Write Latency: 0 00:10:04.038 Idle Power: Not Reported 00:10:04.038 Active Power: Not Reported 00:10:04.038 Non-Operational Permissive Mode: Not Supported 00:10:04.038 00:10:04.038 Health Information 00:10:04.038 ================== 00:10:04.038 Critical Warnings: 00:10:04.038 
Available Spare Space: OK 00:10:04.038 Temperature: OK 00:10:04.038 Device Reliability: OK 00:10:04.038 Read Only: No 00:10:04.038 Volatile Memory Backup: OK 00:10:04.038 Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.038 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:04.038 Available Spare: 0% 00:10:04.038 Available Spare Threshold: 0% 00:10:04.038 Life Percentage Used: 0% 00:10:04.038 Data Units Read: 838 00:10:04.038 Data Units Written: 767 00:10:04.038 Host Read Commands: 35961 00:10:04.038 Host Write Commands: 35384 00:10:04.038 Controller Busy Time: 0 minutes 00:10:04.038 Power Cycles: 0 00:10:04.038 Power On Hours: 0 hours 00:10:04.038 Unsafe Shutdowns: 0 00:10:04.038 Unrecoverable Media Errors: 0 00:10:04.038 Lifetime Error Log Entries: 0 00:10:04.038 Warning Temperature Time: 0 minutes 00:10:04.038 Critical Temperature Time: 0 minutes 00:10:04.038 00:10:04.038 Number of Queues 00:10:04.038 ================ 00:10:04.038 Number of I/O Submission Queues: 64 00:10:04.038 Number of I/O Completion Queues: 64 00:10:04.038 00:10:04.038 ZNS Specific Controller Data 00:10:04.038 ============================ 00:10:04.038 Zone Append Size Limit: 0 00:10:04.038 00:10:04.038 00:10:04.038 Active Namespaces 00:10:04.038 ================= 00:10:04.038 Namespace ID:1 00:10:04.038 Error Recovery Timeout: Unlimited 00:10:04.038 Command Set Identifier: NVM (00h) 00:10:04.038 Deallocate: Supported 00:10:04.038 Deallocated/Unwritten Error: Supported 00:10:04.038 Deallocated Read Value: All 0x00 00:10:04.038 Deallocate in Write Zeroes: Not Supported 00:10:04.038 Deallocated Guard Field: 0xFFFF 00:10:04.038 Flush: Supported 00:10:04.038 Reservation: Not Supported 00:10:04.038 Namespace Sharing Capabilities: Multiple Controllers 00:10:04.038 Size (in LBAs): 262144 (1GiB) 00:10:04.038 Capacity (in LBAs): 262144 (1GiB) 00:10:04.038 Utilization (in LBAs): 262144 (1GiB) 00:10:04.038 Thin Provisioning: Not Supported 00:10:04.038 Per-NS Atomic Units: No 00:10:04.038 Maximum Single Source Range Length: 128 00:10:04.038 Maximum Copy Length: 128 00:10:04.038 Maximum Source Range Count: 128 00:10:04.038 NGUID/EUI64 Never Reused: No 00:10:04.038 Namespace Write Protected: No 00:10:04.038 Endurance group ID: 1 00:10:04.038 Number of LBA Formats: 8 00:10:04.038 Current LBA Format: LBA Format #04 00:10:04.038 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.038 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.038 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.038 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.038 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.038 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.038 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.038 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.038 00:10:04.038 Get Feature FDP: 00:10:04.038 ================ 00:10:04.038 Enabled: Yes 00:10:04.038 FDP configuration index: 0 00:10:04.038 00:10:04.038 FDP configurations log page 00:10:04.038 =========================== 00:10:04.038 Number of FDP configurations: 1 00:10:04.038 Version: 0 00:10:04.038 Size: 112 00:10:04.038 FDP Configuration Descriptor: 0 00:10:04.038 Descriptor Size: 96 00:10:04.038 Reclaim Group Identifier format: 2 00:10:04.038 FDP Volatile Write Cache: Not Present 00:10:04.038 FDP Configuration: Valid 00:10:04.038 Vendor Specific Size: 0 00:10:04.038 Number of Reclaim Groups: 2 00:10:04.038 Number of Reclaim Unit Handles: 8 00:10:04.038 Max Placement Identifiers: 128 00:10:04.038 Number of 
Namespaces Supported: 256 00:10:04.038 Reclaim Unit Nominal Size: 6000000 bytes 00:10:04.038 Estimated Reclaim Unit Time Limit: Not Reported 00:10:04.038 RUH Desc #000: RUH Type: Initially Isolated 00:10:04.038 RUH Desc #001: RUH Type: Initially Isolated 00:10:04.038 RUH Desc #002: RUH Type: Initially Isolated 00:10:04.038 RUH Desc #003: RUH Type: Initially Isolated 00:10:04.038 RUH Desc #004: RUH Type: Initially Isolated 00:10:04.038 RUH Desc #005: RUH Type: Initially Isolated 00:10:04.038 RUH Desc #006: RUH Type: Initially Isolated 00:10:04.039 RUH Desc #007: RUH Type: Initially Isolated 00:10:04.039 00:10:04.039 FDP reclaim unit handle usage log page 00:10:04.039 ====================================== 00:10:04.039 Number of Reclaim Unit Handles: 8 00:10:04.039 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:04.039 RUH Usage Desc #001: RUH Attributes: Unused 00:10:04.039 RUH Usage Desc #002: RUH Attributes: Unused 00:10:04.039 RUH Usage Desc #003: RUH Attributes: Unused 00:10:04.039 RUH Usage Desc #004: RUH Attributes: Unused 00:10:04.039 RUH Usage Desc #005: RUH Attributes: Unused 00:10:04.039 RUH Usage Desc #006: RUH Attributes: Unused 00:10:04.039 RUH Usage Desc #007: RUH Attributes: Unused 00:10:04.039 00:10:04.039 FDP statistics log page 00:10:04.039 ======================= 00:10:04.039 Host bytes with metadata written: 495296512 00:10:04.039 Media bytes with metadata written: 495349760 00:10:04.039 [2024-11-15 11:13:41.222897] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64239 terminated unexpected 00:10:04.039 Media bytes erased: 0 00:10:04.039 00:10:04.039 FDP events log page 00:10:04.039 =================== 00:10:04.039 Number of FDP events: 0 00:10:04.039 00:10:04.039 NVM Specific Namespace Data 00:10:04.039 =========================== 00:10:04.039 Logical Block Storage Tag Mask: 0 00:10:04.039 Protection Information Capabilities: 00:10:04.039 16b Guard Protection Information Storage Tag Support: No 00:10:04.039 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:04.039 Storage Tag Check Read Support: No 00:10:04.039 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.039 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.039 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.039 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.039 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.039 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.039 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.039 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.039 ===================================================== 00:10:04.039 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:04.039 ===================================================== 00:10:04.039 Controller Capabilities/Features 00:10:04.039 ================================ 00:10:04.039 Vendor ID: 1b36 00:10:04.039 Subsystem Vendor ID: 1af4 00:10:04.039 Serial Number: 12342 00:10:04.039 Model Number: QEMU NVMe Ctrl 00:10:04.039 Firmware Version: 8.0.0 00:10:04.039 Recommended Arb Burst: 6 00:10:04.039 IEEE OUI Identifier: 00 54 52 00:10:04.039 Multi-path I/O 
00:10:04.039 May have multiple subsystem ports: No 00:10:04.039 May have multiple controllers: No 00:10:04.039 Associated with SR-IOV VF: No 00:10:04.039 Max Data Transfer Size: 524288 00:10:04.039 Max Number of Namespaces: 256 00:10:04.039 Max Number of I/O Queues: 64 00:10:04.039 NVMe Specification Version (VS): 1.4 00:10:04.039 NVMe Specification Version (Identify): 1.4 00:10:04.039 Maximum Queue Entries: 2048 00:10:04.039 Contiguous Queues Required: Yes 00:10:04.039 Arbitration Mechanisms Supported 00:10:04.039 Weighted Round Robin: Not Supported 00:10:04.039 Vendor Specific: Not Supported 00:10:04.039 Reset Timeout: 7500 ms 00:10:04.039 Doorbell Stride: 4 bytes 00:10:04.039 NVM Subsystem Reset: Not Supported 00:10:04.039 Command Sets Supported 00:10:04.039 NVM Command Set: Supported 00:10:04.039 Boot Partition: Not Supported 00:10:04.039 Memory Page Size Minimum: 4096 bytes 00:10:04.039 Memory Page Size Maximum: 65536 bytes 00:10:04.039 Persistent Memory Region: Not Supported 00:10:04.039 Optional Asynchronous Events Supported 00:10:04.039 Namespace Attribute Notices: Supported 00:10:04.039 Firmware Activation Notices: Not Supported 00:10:04.039 ANA Change Notices: Not Supported 00:10:04.039 PLE Aggregate Log Change Notices: Not Supported 00:10:04.039 LBA Status Info Alert Notices: Not Supported 00:10:04.039 EGE Aggregate Log Change Notices: Not Supported 00:10:04.039 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.039 Zone Descriptor Change Notices: Not Supported 00:10:04.039 Discovery Log Change Notices: Not Supported 00:10:04.039 Controller Attributes 00:10:04.039 128-bit Host Identifier: Not Supported 00:10:04.039 Non-Operational Permissive Mode: Not Supported 00:10:04.039 NVM Sets: Not Supported 00:10:04.039 Read Recovery Levels: Not Supported 00:10:04.039 Endurance Groups: Not Supported 00:10:04.039 Predictable Latency Mode: Not Supported 00:10:04.039 Traffic Based Keep ALive: Not Supported 00:10:04.039 Namespace Granularity: Not Supported 00:10:04.039 SQ Associations: Not Supported 00:10:04.039 UUID List: Not Supported 00:10:04.039 Multi-Domain Subsystem: Not Supported 00:10:04.039 Fixed Capacity Management: Not Supported 00:10:04.039 Variable Capacity Management: Not Supported 00:10:04.039 Delete Endurance Group: Not Supported 00:10:04.039 Delete NVM Set: Not Supported 00:10:04.039 Extended LBA Formats Supported: Supported 00:10:04.039 Flexible Data Placement Supported: Not Supported 00:10:04.039 00:10:04.039 Controller Memory Buffer Support 00:10:04.039 ================================ 00:10:04.039 Supported: No 00:10:04.039 00:10:04.039 Persistent Memory Region Support 00:10:04.039 ================================ 00:10:04.039 Supported: No 00:10:04.039 00:10:04.039 Admin Command Set Attributes 00:10:04.039 ============================ 00:10:04.039 Security Send/Receive: Not Supported 00:10:04.039 Format NVM: Supported 00:10:04.039 Firmware Activate/Download: Not Supported 00:10:04.039 Namespace Management: Supported 00:10:04.039 Device Self-Test: Not Supported 00:10:04.039 Directives: Supported 00:10:04.039 NVMe-MI: Not Supported 00:10:04.039 Virtualization Management: Not Supported 00:10:04.039 Doorbell Buffer Config: Supported 00:10:04.039 Get LBA Status Capability: Not Supported 00:10:04.039 Command & Feature Lockdown Capability: Not Supported 00:10:04.039 Abort Command Limit: 4 00:10:04.039 Async Event Request Limit: 4 00:10:04.039 Number of Firmware Slots: N/A 00:10:04.039 Firmware Slot 1 Read-Only: N/A 00:10:04.039 Firmware Activation Without Reset: N/A 
00:10:04.039 Multiple Update Detection Support: N/A 00:10:04.039 Firmware Update Granularity: No Information Provided 00:10:04.039 Per-Namespace SMART Log: Yes 00:10:04.039 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.039 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:04.039 Command Effects Log Page: Supported 00:10:04.039 Get Log Page Extended Data: Supported 00:10:04.039 Telemetry Log Pages: Not Supported 00:10:04.039 Persistent Event Log Pages: Not Supported 00:10:04.039 Supported Log Pages Log Page: May Support 00:10:04.039 Commands Supported & Effects Log Page: Not Supported 00:10:04.039 Feature Identifiers & Effects Log Page:May Support 00:10:04.039 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.039 Data Area 4 for Telemetry Log: Not Supported 00:10:04.039 Error Log Page Entries Supported: 1 00:10:04.039 Keep Alive: Not Supported 00:10:04.039 00:10:04.039 NVM Command Set Attributes 00:10:04.039 ========================== 00:10:04.039 Submission Queue Entry Size 00:10:04.039 Max: 64 00:10:04.039 Min: 64 00:10:04.039 Completion Queue Entry Size 00:10:04.039 Max: 16 00:10:04.039 Min: 16 00:10:04.039 Number of Namespaces: 256 00:10:04.039 Compare Command: Supported 00:10:04.039 Write Uncorrectable Command: Not Supported 00:10:04.039 Dataset Management Command: Supported 00:10:04.039 Write Zeroes Command: Supported 00:10:04.039 Set Features Save Field: Supported 00:10:04.039 Reservations: Not Supported 00:10:04.040 Timestamp: Supported 00:10:04.040 Copy: Supported 00:10:04.040 Volatile Write Cache: Present 00:10:04.040 Atomic Write Unit (Normal): 1 00:10:04.040 Atomic Write Unit (PFail): 1 00:10:04.040 Atomic Compare & Write Unit: 1 00:10:04.040 Fused Compare & Write: Not Supported 00:10:04.040 Scatter-Gather List 00:10:04.040 SGL Command Set: Supported 00:10:04.040 SGL Keyed: Not Supported 00:10:04.040 SGL Bit Bucket Descriptor: Not Supported 00:10:04.040 SGL Metadata Pointer: Not Supported 00:10:04.040 Oversized SGL: Not Supported 00:10:04.040 SGL Metadata Address: Not Supported 00:10:04.040 SGL Offset: Not Supported 00:10:04.040 Transport SGL Data Block: Not Supported 00:10:04.040 Replay Protected Memory Block: Not Supported 00:10:04.040 00:10:04.040 Firmware Slot Information 00:10:04.040 ========================= 00:10:04.040 Active slot: 1 00:10:04.040 Slot 1 Firmware Revision: 1.0 00:10:04.040 00:10:04.040 00:10:04.040 Commands Supported and Effects 00:10:04.040 ============================== 00:10:04.040 Admin Commands 00:10:04.040 -------------- 00:10:04.040 Delete I/O Submission Queue (00h): Supported 00:10:04.040 Create I/O Submission Queue (01h): Supported 00:10:04.040 Get Log Page (02h): Supported 00:10:04.040 Delete I/O Completion Queue (04h): Supported 00:10:04.040 Create I/O Completion Queue (05h): Supported 00:10:04.040 Identify (06h): Supported 00:10:04.040 Abort (08h): Supported 00:10:04.040 Set Features (09h): Supported 00:10:04.040 Get Features (0Ah): Supported 00:10:04.040 Asynchronous Event Request (0Ch): Supported 00:10:04.040 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:04.040 Directive Send (19h): Supported 00:10:04.040 Directive Receive (1Ah): Supported 00:10:04.040 Virtualization Management (1Ch): Supported 00:10:04.040 Doorbell Buffer Config (7Ch): Supported 00:10:04.040 Format NVM (80h): Supported LBA-Change 00:10:04.040 I/O Commands 00:10:04.040 ------------ 00:10:04.040 Flush (00h): Supported LBA-Change 00:10:04.040 Write (01h): Supported LBA-Change 00:10:04.040 Read (02h): Supported 00:10:04.040 Compare (05h): 
Supported 00:10:04.040 Write Zeroes (08h): Supported LBA-Change 00:10:04.040 Dataset Management (09h): Supported LBA-Change 00:10:04.040 Unknown (0Ch): Supported 00:10:04.040 Unknown (12h): Supported 00:10:04.040 Copy (19h): Supported LBA-Change 00:10:04.040 Unknown (1Dh): Supported LBA-Change 00:10:04.040 00:10:04.040 Error Log 00:10:04.040 ========= 00:10:04.040 00:10:04.040 Arbitration 00:10:04.040 =========== 00:10:04.040 Arbitration Burst: no limit 00:10:04.040 00:10:04.040 Power Management 00:10:04.040 ================ 00:10:04.040 Number of Power States: 1 00:10:04.040 Current Power State: Power State #0 00:10:04.040 Power State #0: 00:10:04.040 Max Power: 25.00 W 00:10:04.040 Non-Operational State: Operational 00:10:04.040 Entry Latency: 16 microseconds 00:10:04.040 Exit Latency: 4 microseconds 00:10:04.040 Relative Read Throughput: 0 00:10:04.040 Relative Read Latency: 0 00:10:04.040 Relative Write Throughput: 0 00:10:04.040 Relative Write Latency: 0 00:10:04.040 Idle Power: Not Reported 00:10:04.040 Active Power: Not Reported 00:10:04.040 Non-Operational Permissive Mode: Not Supported 00:10:04.040 00:10:04.040 Health Information 00:10:04.040 ================== 00:10:04.040 Critical Warnings: 00:10:04.040 Available Spare Space: OK 00:10:04.040 Temperature: OK 00:10:04.040 Device Reliability: OK 00:10:04.040 Read Only: No 00:10:04.040 Volatile Memory Backup: OK 00:10:04.040 Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.040 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:04.040 Available Spare: 0% 00:10:04.040 Available Spare Threshold: 0% 00:10:04.040 Life Percentage Used: 0% 00:10:04.040 Data Units Read: 2325 00:10:04.040 Data Units Written: 2113 00:10:04.040 Host Read Commands: 106428 00:10:04.040 Host Write Commands: 104697 00:10:04.040 Controller Busy Time: 0 minutes 00:10:04.040 Power Cycles: 0 00:10:04.040 Power On Hours: 0 hours 00:10:04.040 Unsafe Shutdowns: 0 00:10:04.040 Unrecoverable Media Errors: 0 00:10:04.040 Lifetime Error Log Entries: 0 00:10:04.040 Warning Temperature Time: 0 minutes 00:10:04.040 Critical Temperature Time: 0 minutes 00:10:04.040 00:10:04.040 Number of Queues 00:10:04.040 ================ 00:10:04.040 Number of I/O Submission Queues: 64 00:10:04.040 Number of I/O Completion Queues: 64 00:10:04.040 00:10:04.040 ZNS Specific Controller Data 00:10:04.040 ============================ 00:10:04.040 Zone Append Size Limit: 0 00:10:04.040 00:10:04.040 00:10:04.040 Active Namespaces 00:10:04.040 ================= 00:10:04.040 Namespace ID:1 00:10:04.040 Error Recovery Timeout: Unlimited 00:10:04.040 Command Set Identifier: NVM (00h) 00:10:04.040 Deallocate: Supported 00:10:04.040 Deallocated/Unwritten Error: Supported 00:10:04.040 Deallocated Read Value: All 0x00 00:10:04.040 Deallocate in Write Zeroes: Not Supported 00:10:04.040 Deallocated Guard Field: 0xFFFF 00:10:04.040 Flush: Supported 00:10:04.040 Reservation: Not Supported 00:10:04.040 Namespace Sharing Capabilities: Private 00:10:04.040 Size (in LBAs): 1048576 (4GiB) 00:10:04.040 Capacity (in LBAs): 1048576 (4GiB) 00:10:04.040 Utilization (in LBAs): 1048576 (4GiB) 00:10:04.040 Thin Provisioning: Not Supported 00:10:04.040 Per-NS Atomic Units: No 00:10:04.040 Maximum Single Source Range Length: 128 00:10:04.040 Maximum Copy Length: 128 00:10:04.040 Maximum Source Range Count: 128 00:10:04.040 NGUID/EUI64 Never Reused: No 00:10:04.040 Namespace Write Protected: No 00:10:04.040 Number of LBA Formats: 8 00:10:04.040 Current LBA Format: LBA Format #04 00:10:04.040 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:10:04.040 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.040 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.040 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.040 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.040 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.040 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.040 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.040 00:10:04.040 NVM Specific Namespace Data 00:10:04.040 =========================== 00:10:04.040 Logical Block Storage Tag Mask: 0 00:10:04.040 Protection Information Capabilities: 00:10:04.040 16b Guard Protection Information Storage Tag Support: No 00:10:04.040 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:04.040 Storage Tag Check Read Support: No 00:10:04.040 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.040 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.040 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.040 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.040 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.040 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.040 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.040 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.040 Namespace ID:2 00:10:04.040 Error Recovery Timeout: Unlimited 00:10:04.040 Command Set Identifier: NVM (00h) 00:10:04.040 Deallocate: Supported 00:10:04.040 Deallocated/Unwritten Error: Supported 00:10:04.040 Deallocated Read Value: All 0x00 00:10:04.040 Deallocate in Write Zeroes: Not Supported 00:10:04.040 Deallocated Guard Field: 0xFFFF 00:10:04.040 Flush: Supported 00:10:04.040 Reservation: Not Supported 00:10:04.040 Namespace Sharing Capabilities: Private 00:10:04.040 Size (in LBAs): 1048576 (4GiB) 00:10:04.040 Capacity (in LBAs): 1048576 (4GiB) 00:10:04.040 Utilization (in LBAs): 1048576 (4GiB) 00:10:04.040 Thin Provisioning: Not Supported 00:10:04.040 Per-NS Atomic Units: No 00:10:04.040 Maximum Single Source Range Length: 128 00:10:04.040 Maximum Copy Length: 128 00:10:04.040 Maximum Source Range Count: 128 00:10:04.040 NGUID/EUI64 Never Reused: No 00:10:04.040 Namespace Write Protected: No 00:10:04.040 Number of LBA Formats: 8 00:10:04.040 Current LBA Format: LBA Format #04 00:10:04.040 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.040 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.040 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.040 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.040 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.040 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.040 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.040 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.040 00:10:04.040 NVM Specific Namespace Data 00:10:04.040 =========================== 00:10:04.040 Logical Block Storage Tag Mask: 0 00:10:04.041 Protection Information Capabilities: 00:10:04.041 16b Guard Protection Information Storage Tag Support: No 00:10:04.041 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:10:04.041 Storage Tag Check Read Support: No 00:10:04.041 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Namespace ID:3 00:10:04.041 Error Recovery Timeout: Unlimited 00:10:04.041 Command Set Identifier: NVM (00h) 00:10:04.041 Deallocate: Supported 00:10:04.041 Deallocated/Unwritten Error: Supported 00:10:04.041 Deallocated Read Value: All 0x00 00:10:04.041 Deallocate in Write Zeroes: Not Supported 00:10:04.041 Deallocated Guard Field: 0xFFFF 00:10:04.041 Flush: Supported 00:10:04.041 Reservation: Not Supported 00:10:04.041 Namespace Sharing Capabilities: Private 00:10:04.041 Size (in LBAs): 1048576 (4GiB) 00:10:04.041 Capacity (in LBAs): 1048576 (4GiB) 00:10:04.041 Utilization (in LBAs): 1048576 (4GiB) 00:10:04.041 Thin Provisioning: Not Supported 00:10:04.041 Per-NS Atomic Units: No 00:10:04.041 Maximum Single Source Range Length: 128 00:10:04.041 Maximum Copy Length: 128 00:10:04.041 Maximum Source Range Count: 128 00:10:04.041 NGUID/EUI64 Never Reused: No 00:10:04.041 Namespace Write Protected: No 00:10:04.041 Number of LBA Formats: 8 00:10:04.041 Current LBA Format: LBA Format #04 00:10:04.041 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.041 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.041 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.041 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.041 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.041 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.041 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.041 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.041 00:10:04.041 NVM Specific Namespace Data 00:10:04.041 =========================== 00:10:04.041 Logical Block Storage Tag Mask: 0 00:10:04.041 Protection Information Capabilities: 00:10:04.041 16b Guard Protection Information Storage Tag Support: No 00:10:04.041 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:04.041 Storage Tag Check Read Support: No 00:10:04.041 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.041 11:13:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:04.041 11:13:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:04.301 ===================================================== 00:10:04.301 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:04.301 ===================================================== 00:10:04.301 Controller Capabilities/Features 00:10:04.301 ================================ 00:10:04.301 Vendor ID: 1b36 00:10:04.301 Subsystem Vendor ID: 1af4 00:10:04.301 Serial Number: 12340 00:10:04.301 Model Number: QEMU NVMe Ctrl 00:10:04.301 Firmware Version: 8.0.0 00:10:04.301 Recommended Arb Burst: 6 00:10:04.301 IEEE OUI Identifier: 00 54 52 00:10:04.301 Multi-path I/O 00:10:04.301 May have multiple subsystem ports: No 00:10:04.301 May have multiple controllers: No 00:10:04.301 Associated with SR-IOV VF: No 00:10:04.301 Max Data Transfer Size: 524288 00:10:04.301 Max Number of Namespaces: 256 00:10:04.301 Max Number of I/O Queues: 64 00:10:04.301 NVMe Specification Version (VS): 1.4 00:10:04.301 NVMe Specification Version (Identify): 1.4 00:10:04.301 Maximum Queue Entries: 2048 00:10:04.301 Contiguous Queues Required: Yes 00:10:04.301 Arbitration Mechanisms Supported 00:10:04.301 Weighted Round Robin: Not Supported 00:10:04.301 Vendor Specific: Not Supported 00:10:04.301 Reset Timeout: 7500 ms 00:10:04.301 Doorbell Stride: 4 bytes 00:10:04.301 NVM Subsystem Reset: Not Supported 00:10:04.301 Command Sets Supported 00:10:04.301 NVM Command Set: Supported 00:10:04.301 Boot Partition: Not Supported 00:10:04.301 Memory Page Size Minimum: 4096 bytes 00:10:04.301 Memory Page Size Maximum: 65536 bytes 00:10:04.301 Persistent Memory Region: Not Supported 00:10:04.301 Optional Asynchronous Events Supported 00:10:04.301 Namespace Attribute Notices: Supported 00:10:04.301 Firmware Activation Notices: Not Supported 00:10:04.301 ANA Change Notices: Not Supported 00:10:04.301 PLE Aggregate Log Change Notices: Not Supported 00:10:04.301 LBA Status Info Alert Notices: Not Supported 00:10:04.301 EGE Aggregate Log Change Notices: Not Supported 00:10:04.301 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.301 Zone Descriptor Change Notices: Not Supported 00:10:04.301 Discovery Log Change Notices: Not Supported 00:10:04.301 Controller Attributes 00:10:04.301 128-bit Host Identifier: Not Supported 00:10:04.301 Non-Operational Permissive Mode: Not Supported 00:10:04.301 NVM Sets: Not Supported 00:10:04.301 Read Recovery Levels: Not Supported 00:10:04.301 Endurance Groups: Not Supported 00:10:04.301 Predictable Latency Mode: Not Supported 00:10:04.301 Traffic Based Keep ALive: Not Supported 00:10:04.301 Namespace Granularity: Not Supported 00:10:04.301 SQ Associations: Not Supported 00:10:04.301 UUID List: Not Supported 00:10:04.301 Multi-Domain Subsystem: Not Supported 00:10:04.301 Fixed Capacity Management: Not Supported 00:10:04.301 Variable Capacity Management: Not Supported 00:10:04.301 Delete Endurance Group: Not Supported 00:10:04.301 Delete NVM Set: Not Supported 00:10:04.301 Extended LBA Formats Supported: Supported 00:10:04.301 Flexible Data Placement Supported: Not Supported 00:10:04.301 00:10:04.301 Controller Memory Buffer Support 00:10:04.301 ================================ 00:10:04.301 Supported: No 00:10:04.301 00:10:04.301 Persistent Memory Region Support 00:10:04.301 
================================ 00:10:04.301 Supported: No 00:10:04.301 00:10:04.301 Admin Command Set Attributes 00:10:04.301 ============================ 00:10:04.301 Security Send/Receive: Not Supported 00:10:04.301 Format NVM: Supported 00:10:04.301 Firmware Activate/Download: Not Supported 00:10:04.301 Namespace Management: Supported 00:10:04.301 Device Self-Test: Not Supported 00:10:04.301 Directives: Supported 00:10:04.301 NVMe-MI: Not Supported 00:10:04.301 Virtualization Management: Not Supported 00:10:04.301 Doorbell Buffer Config: Supported 00:10:04.301 Get LBA Status Capability: Not Supported 00:10:04.301 Command & Feature Lockdown Capability: Not Supported 00:10:04.301 Abort Command Limit: 4 00:10:04.301 Async Event Request Limit: 4 00:10:04.301 Number of Firmware Slots: N/A 00:10:04.301 Firmware Slot 1 Read-Only: N/A 00:10:04.301 Firmware Activation Without Reset: N/A 00:10:04.301 Multiple Update Detection Support: N/A 00:10:04.301 Firmware Update Granularity: No Information Provided 00:10:04.301 Per-Namespace SMART Log: Yes 00:10:04.301 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.301 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:04.301 Command Effects Log Page: Supported 00:10:04.301 Get Log Page Extended Data: Supported 00:10:04.301 Telemetry Log Pages: Not Supported 00:10:04.301 Persistent Event Log Pages: Not Supported 00:10:04.301 Supported Log Pages Log Page: May Support 00:10:04.301 Commands Supported & Effects Log Page: Not Supported 00:10:04.301 Feature Identifiers & Effects Log Page:May Support 00:10:04.301 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.302 Data Area 4 for Telemetry Log: Not Supported 00:10:04.302 Error Log Page Entries Supported: 1 00:10:04.302 Keep Alive: Not Supported 00:10:04.302 00:10:04.302 NVM Command Set Attributes 00:10:04.302 ========================== 00:10:04.302 Submission Queue Entry Size 00:10:04.302 Max: 64 00:10:04.302 Min: 64 00:10:04.302 Completion Queue Entry Size 00:10:04.302 Max: 16 00:10:04.302 Min: 16 00:10:04.302 Number of Namespaces: 256 00:10:04.302 Compare Command: Supported 00:10:04.302 Write Uncorrectable Command: Not Supported 00:10:04.302 Dataset Management Command: Supported 00:10:04.302 Write Zeroes Command: Supported 00:10:04.302 Set Features Save Field: Supported 00:10:04.302 Reservations: Not Supported 00:10:04.302 Timestamp: Supported 00:10:04.302 Copy: Supported 00:10:04.302 Volatile Write Cache: Present 00:10:04.302 Atomic Write Unit (Normal): 1 00:10:04.302 Atomic Write Unit (PFail): 1 00:10:04.302 Atomic Compare & Write Unit: 1 00:10:04.302 Fused Compare & Write: Not Supported 00:10:04.302 Scatter-Gather List 00:10:04.302 SGL Command Set: Supported 00:10:04.302 SGL Keyed: Not Supported 00:10:04.302 SGL Bit Bucket Descriptor: Not Supported 00:10:04.302 SGL Metadata Pointer: Not Supported 00:10:04.302 Oversized SGL: Not Supported 00:10:04.302 SGL Metadata Address: Not Supported 00:10:04.302 SGL Offset: Not Supported 00:10:04.302 Transport SGL Data Block: Not Supported 00:10:04.302 Replay Protected Memory Block: Not Supported 00:10:04.302 00:10:04.302 Firmware Slot Information 00:10:04.302 ========================= 00:10:04.302 Active slot: 1 00:10:04.302 Slot 1 Firmware Revision: 1.0 00:10:04.302 00:10:04.302 00:10:04.302 Commands Supported and Effects 00:10:04.302 ============================== 00:10:04.302 Admin Commands 00:10:04.302 -------------- 00:10:04.302 Delete I/O Submission Queue (00h): Supported 00:10:04.302 Create I/O Submission Queue (01h): Supported 00:10:04.302 
Get Log Page (02h): Supported 00:10:04.302 Delete I/O Completion Queue (04h): Supported 00:10:04.302 Create I/O Completion Queue (05h): Supported 00:10:04.302 Identify (06h): Supported 00:10:04.302 Abort (08h): Supported 00:10:04.302 Set Features (09h): Supported 00:10:04.302 Get Features (0Ah): Supported 00:10:04.302 Asynchronous Event Request (0Ch): Supported 00:10:04.302 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:04.302 Directive Send (19h): Supported 00:10:04.302 Directive Receive (1Ah): Supported 00:10:04.302 Virtualization Management (1Ch): Supported 00:10:04.302 Doorbell Buffer Config (7Ch): Supported 00:10:04.302 Format NVM (80h): Supported LBA-Change 00:10:04.302 I/O Commands 00:10:04.302 ------------ 00:10:04.302 Flush (00h): Supported LBA-Change 00:10:04.302 Write (01h): Supported LBA-Change 00:10:04.302 Read (02h): Supported 00:10:04.302 Compare (05h): Supported 00:10:04.302 Write Zeroes (08h): Supported LBA-Change 00:10:04.302 Dataset Management (09h): Supported LBA-Change 00:10:04.302 Unknown (0Ch): Supported 00:10:04.302 Unknown (12h): Supported 00:10:04.302 Copy (19h): Supported LBA-Change 00:10:04.302 Unknown (1Dh): Supported LBA-Change 00:10:04.302 00:10:04.302 Error Log 00:10:04.302 ========= 00:10:04.302 00:10:04.302 Arbitration 00:10:04.302 =========== 00:10:04.302 Arbitration Burst: no limit 00:10:04.302 00:10:04.302 Power Management 00:10:04.302 ================ 00:10:04.302 Number of Power States: 1 00:10:04.302 Current Power State: Power State #0 00:10:04.302 Power State #0: 00:10:04.302 Max Power: 25.00 W 00:10:04.302 Non-Operational State: Operational 00:10:04.302 Entry Latency: 16 microseconds 00:10:04.302 Exit Latency: 4 microseconds 00:10:04.302 Relative Read Throughput: 0 00:10:04.302 Relative Read Latency: 0 00:10:04.302 Relative Write Throughput: 0 00:10:04.302 Relative Write Latency: 0 00:10:04.302 Idle Power: Not Reported 00:10:04.302 Active Power: Not Reported 00:10:04.302 Non-Operational Permissive Mode: Not Supported 00:10:04.302 00:10:04.302 Health Information 00:10:04.302 ================== 00:10:04.302 Critical Warnings: 00:10:04.302 Available Spare Space: OK 00:10:04.302 Temperature: OK 00:10:04.302 Device Reliability: OK 00:10:04.302 Read Only: No 00:10:04.302 Volatile Memory Backup: OK 00:10:04.302 Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.302 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:04.302 Available Spare: 0% 00:10:04.302 Available Spare Threshold: 0% 00:10:04.302 Life Percentage Used: 0% 00:10:04.302 Data Units Read: 730 00:10:04.302 Data Units Written: 658 00:10:04.302 Host Read Commands: 34821 00:10:04.302 Host Write Commands: 34607 00:10:04.302 Controller Busy Time: 0 minutes 00:10:04.302 Power Cycles: 0 00:10:04.302 Power On Hours: 0 hours 00:10:04.302 Unsafe Shutdowns: 0 00:10:04.302 Unrecoverable Media Errors: 0 00:10:04.302 Lifetime Error Log Entries: 0 00:10:04.302 Warning Temperature Time: 0 minutes 00:10:04.302 Critical Temperature Time: 0 minutes 00:10:04.302 00:10:04.302 Number of Queues 00:10:04.302 ================ 00:10:04.302 Number of I/O Submission Queues: 64 00:10:04.302 Number of I/O Completion Queues: 64 00:10:04.302 00:10:04.302 ZNS Specific Controller Data 00:10:04.302 ============================ 00:10:04.302 Zone Append Size Limit: 0 00:10:04.302 00:10:04.302 00:10:04.302 Active Namespaces 00:10:04.302 ================= 00:10:04.302 Namespace ID:1 00:10:04.302 Error Recovery Timeout: Unlimited 00:10:04.302 Command Set Identifier: NVM (00h) 00:10:04.302 Deallocate: Supported 
00:10:04.302 Deallocated/Unwritten Error: Supported 00:10:04.302 Deallocated Read Value: All 0x00 00:10:04.302 Deallocate in Write Zeroes: Not Supported 00:10:04.302 Deallocated Guard Field: 0xFFFF 00:10:04.302 Flush: Supported 00:10:04.302 Reservation: Not Supported 00:10:04.302 Metadata Transferred as: Separate Metadata Buffer 00:10:04.302 Namespace Sharing Capabilities: Private 00:10:04.302 Size (in LBAs): 1548666 (5GiB) 00:10:04.302 Capacity (in LBAs): 1548666 (5GiB) 00:10:04.302 Utilization (in LBAs): 1548666 (5GiB) 00:10:04.302 Thin Provisioning: Not Supported 00:10:04.302 Per-NS Atomic Units: No 00:10:04.302 Maximum Single Source Range Length: 128 00:10:04.302 Maximum Copy Length: 128 00:10:04.302 Maximum Source Range Count: 128 00:10:04.302 NGUID/EUI64 Never Reused: No 00:10:04.302 Namespace Write Protected: No 00:10:04.302 Number of LBA Formats: 8 00:10:04.302 Current LBA Format: LBA Format #07 00:10:04.302 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.302 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.302 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.302 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.302 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.302 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.302 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.302 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.302 00:10:04.302 NVM Specific Namespace Data 00:10:04.302 =========================== 00:10:04.302 Logical Block Storage Tag Mask: 0 00:10:04.302 Protection Information Capabilities: 00:10:04.302 16b Guard Protection Information Storage Tag Support: No 00:10:04.302 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:04.302 Storage Tag Check Read Support: No 00:10:04.302 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.302 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.302 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.302 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.302 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.302 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.302 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.302 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.302 11:13:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:04.302 11:13:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:04.561 ===================================================== 00:10:04.561 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:04.561 ===================================================== 00:10:04.561 Controller Capabilities/Features 00:10:04.561 ================================ 00:10:04.561 Vendor ID: 1b36 00:10:04.561 Subsystem Vendor ID: 1af4 00:10:04.561 Serial Number: 12341 00:10:04.561 Model Number: QEMU NVMe Ctrl 00:10:04.561 Firmware Version: 8.0.0 00:10:04.561 Recommended Arb Burst: 6 00:10:04.561 IEEE OUI Identifier: 00 54 52 00:10:04.561 Multi-path I/O 00:10:04.561 May have multiple subsystem ports: No 00:10:04.561 May have multiple 
controllers: No 00:10:04.561 Associated with SR-IOV VF: No 00:10:04.561 Max Data Transfer Size: 524288 00:10:04.561 Max Number of Namespaces: 256 00:10:04.561 Max Number of I/O Queues: 64 00:10:04.561 NVMe Specification Version (VS): 1.4 00:10:04.561 NVMe Specification Version (Identify): 1.4 00:10:04.561 Maximum Queue Entries: 2048 00:10:04.561 Contiguous Queues Required: Yes 00:10:04.561 Arbitration Mechanisms Supported 00:10:04.561 Weighted Round Robin: Not Supported 00:10:04.561 Vendor Specific: Not Supported 00:10:04.561 Reset Timeout: 7500 ms 00:10:04.561 Doorbell Stride: 4 bytes 00:10:04.561 NVM Subsystem Reset: Not Supported 00:10:04.561 Command Sets Supported 00:10:04.561 NVM Command Set: Supported 00:10:04.561 Boot Partition: Not Supported 00:10:04.561 Memory Page Size Minimum: 4096 bytes 00:10:04.561 Memory Page Size Maximum: 65536 bytes 00:10:04.561 Persistent Memory Region: Not Supported 00:10:04.561 Optional Asynchronous Events Supported 00:10:04.561 Namespace Attribute Notices: Supported 00:10:04.561 Firmware Activation Notices: Not Supported 00:10:04.561 ANA Change Notices: Not Supported 00:10:04.561 PLE Aggregate Log Change Notices: Not Supported 00:10:04.561 LBA Status Info Alert Notices: Not Supported 00:10:04.561 EGE Aggregate Log Change Notices: Not Supported 00:10:04.561 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.561 Zone Descriptor Change Notices: Not Supported 00:10:04.561 Discovery Log Change Notices: Not Supported 00:10:04.561 Controller Attributes 00:10:04.561 128-bit Host Identifier: Not Supported 00:10:04.561 Non-Operational Permissive Mode: Not Supported 00:10:04.561 NVM Sets: Not Supported 00:10:04.561 Read Recovery Levels: Not Supported 00:10:04.561 Endurance Groups: Not Supported 00:10:04.561 Predictable Latency Mode: Not Supported 00:10:04.561 Traffic Based Keep ALive: Not Supported 00:10:04.561 Namespace Granularity: Not Supported 00:10:04.561 SQ Associations: Not Supported 00:10:04.561 UUID List: Not Supported 00:10:04.561 Multi-Domain Subsystem: Not Supported 00:10:04.561 Fixed Capacity Management: Not Supported 00:10:04.561 Variable Capacity Management: Not Supported 00:10:04.561 Delete Endurance Group: Not Supported 00:10:04.561 Delete NVM Set: Not Supported 00:10:04.561 Extended LBA Formats Supported: Supported 00:10:04.561 Flexible Data Placement Supported: Not Supported 00:10:04.561 00:10:04.561 Controller Memory Buffer Support 00:10:04.561 ================================ 00:10:04.561 Supported: No 00:10:04.561 00:10:04.561 Persistent Memory Region Support 00:10:04.561 ================================ 00:10:04.561 Supported: No 00:10:04.561 00:10:04.561 Admin Command Set Attributes 00:10:04.561 ============================ 00:10:04.561 Security Send/Receive: Not Supported 00:10:04.561 Format NVM: Supported 00:10:04.561 Firmware Activate/Download: Not Supported 00:10:04.561 Namespace Management: Supported 00:10:04.561 Device Self-Test: Not Supported 00:10:04.561 Directives: Supported 00:10:04.561 NVMe-MI: Not Supported 00:10:04.561 Virtualization Management: Not Supported 00:10:04.561 Doorbell Buffer Config: Supported 00:10:04.561 Get LBA Status Capability: Not Supported 00:10:04.561 Command & Feature Lockdown Capability: Not Supported 00:10:04.561 Abort Command Limit: 4 00:10:04.561 Async Event Request Limit: 4 00:10:04.561 Number of Firmware Slots: N/A 00:10:04.561 Firmware Slot 1 Read-Only: N/A 00:10:04.561 Firmware Activation Without Reset: N/A 00:10:04.561 Multiple Update Detection Support: N/A 00:10:04.561 Firmware Update 
Granularity: No Information Provided 00:10:04.561 Per-Namespace SMART Log: Yes 00:10:04.561 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.561 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:04.561 Command Effects Log Page: Supported 00:10:04.561 Get Log Page Extended Data: Supported 00:10:04.561 Telemetry Log Pages: Not Supported 00:10:04.561 Persistent Event Log Pages: Not Supported 00:10:04.561 Supported Log Pages Log Page: May Support 00:10:04.561 Commands Supported & Effects Log Page: Not Supported 00:10:04.561 Feature Identifiers & Effects Log Page:May Support 00:10:04.561 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.561 Data Area 4 for Telemetry Log: Not Supported 00:10:04.561 Error Log Page Entries Supported: 1 00:10:04.561 Keep Alive: Not Supported 00:10:04.561 00:10:04.561 NVM Command Set Attributes 00:10:04.561 ========================== 00:10:04.561 Submission Queue Entry Size 00:10:04.561 Max: 64 00:10:04.561 Min: 64 00:10:04.561 Completion Queue Entry Size 00:10:04.561 Max: 16 00:10:04.561 Min: 16 00:10:04.561 Number of Namespaces: 256 00:10:04.561 Compare Command: Supported 00:10:04.561 Write Uncorrectable Command: Not Supported 00:10:04.561 Dataset Management Command: Supported 00:10:04.561 Write Zeroes Command: Supported 00:10:04.561 Set Features Save Field: Supported 00:10:04.561 Reservations: Not Supported 00:10:04.561 Timestamp: Supported 00:10:04.561 Copy: Supported 00:10:04.561 Volatile Write Cache: Present 00:10:04.561 Atomic Write Unit (Normal): 1 00:10:04.561 Atomic Write Unit (PFail): 1 00:10:04.561 Atomic Compare & Write Unit: 1 00:10:04.561 Fused Compare & Write: Not Supported 00:10:04.561 Scatter-Gather List 00:10:04.561 SGL Command Set: Supported 00:10:04.561 SGL Keyed: Not Supported 00:10:04.561 SGL Bit Bucket Descriptor: Not Supported 00:10:04.561 SGL Metadata Pointer: Not Supported 00:10:04.561 Oversized SGL: Not Supported 00:10:04.561 SGL Metadata Address: Not Supported 00:10:04.561 SGL Offset: Not Supported 00:10:04.561 Transport SGL Data Block: Not Supported 00:10:04.561 Replay Protected Memory Block: Not Supported 00:10:04.561 00:10:04.561 Firmware Slot Information 00:10:04.561 ========================= 00:10:04.561 Active slot: 1 00:10:04.561 Slot 1 Firmware Revision: 1.0 00:10:04.561 00:10:04.561 00:10:04.561 Commands Supported and Effects 00:10:04.561 ============================== 00:10:04.561 Admin Commands 00:10:04.561 -------------- 00:10:04.561 Delete I/O Submission Queue (00h): Supported 00:10:04.561 Create I/O Submission Queue (01h): Supported 00:10:04.561 Get Log Page (02h): Supported 00:10:04.561 Delete I/O Completion Queue (04h): Supported 00:10:04.561 Create I/O Completion Queue (05h): Supported 00:10:04.561 Identify (06h): Supported 00:10:04.562 Abort (08h): Supported 00:10:04.562 Set Features (09h): Supported 00:10:04.562 Get Features (0Ah): Supported 00:10:04.562 Asynchronous Event Request (0Ch): Supported 00:10:04.562 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:04.562 Directive Send (19h): Supported 00:10:04.562 Directive Receive (1Ah): Supported 00:10:04.562 Virtualization Management (1Ch): Supported 00:10:04.562 Doorbell Buffer Config (7Ch): Supported 00:10:04.562 Format NVM (80h): Supported LBA-Change 00:10:04.562 I/O Commands 00:10:04.562 ------------ 00:10:04.562 Flush (00h): Supported LBA-Change 00:10:04.562 Write (01h): Supported LBA-Change 00:10:04.562 Read (02h): Supported 00:10:04.562 Compare (05h): Supported 00:10:04.562 Write Zeroes (08h): Supported LBA-Change 00:10:04.562 
Dataset Management (09h): Supported LBA-Change 00:10:04.562 Unknown (0Ch): Supported 00:10:04.562 Unknown (12h): Supported 00:10:04.562 Copy (19h): Supported LBA-Change 00:10:04.562 Unknown (1Dh): Supported LBA-Change 00:10:04.562 00:10:04.562 Error Log 00:10:04.562 ========= 00:10:04.562 00:10:04.562 Arbitration 00:10:04.562 =========== 00:10:04.562 Arbitration Burst: no limit 00:10:04.562 00:10:04.562 Power Management 00:10:04.562 ================ 00:10:04.562 Number of Power States: 1 00:10:04.562 Current Power State: Power State #0 00:10:04.562 Power State #0: 00:10:04.562 Max Power: 25.00 W 00:10:04.562 Non-Operational State: Operational 00:10:04.562 Entry Latency: 16 microseconds 00:10:04.562 Exit Latency: 4 microseconds 00:10:04.562 Relative Read Throughput: 0 00:10:04.562 Relative Read Latency: 0 00:10:04.562 Relative Write Throughput: 0 00:10:04.562 Relative Write Latency: 0 00:10:04.821 Idle Power: Not Reported 00:10:04.821 Active Power: Not Reported 00:10:04.821 Non-Operational Permissive Mode: Not Supported 00:10:04.821 00:10:04.821 Health Information 00:10:04.821 ================== 00:10:04.821 Critical Warnings: 00:10:04.821 Available Spare Space: OK 00:10:04.821 Temperature: OK 00:10:04.821 Device Reliability: OK 00:10:04.821 Read Only: No 00:10:04.821 Volatile Memory Backup: OK 00:10:04.821 Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.821 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:04.821 Available Spare: 0% 00:10:04.821 Available Spare Threshold: 0% 00:10:04.821 Life Percentage Used: 0% 00:10:04.821 Data Units Read: 1111 00:10:04.821 Data Units Written: 976 00:10:04.821 Host Read Commands: 51128 00:10:04.821 Host Write Commands: 49874 00:10:04.821 Controller Busy Time: 0 minutes 00:10:04.821 Power Cycles: 0 00:10:04.821 Power On Hours: 0 hours 00:10:04.821 Unsafe Shutdowns: 0 00:10:04.821 Unrecoverable Media Errors: 0 00:10:04.821 Lifetime Error Log Entries: 0 00:10:04.821 Warning Temperature Time: 0 minutes 00:10:04.821 Critical Temperature Time: 0 minutes 00:10:04.821 00:10:04.821 Number of Queues 00:10:04.821 ================ 00:10:04.821 Number of I/O Submission Queues: 64 00:10:04.821 Number of I/O Completion Queues: 64 00:10:04.822 00:10:04.822 ZNS Specific Controller Data 00:10:04.822 ============================ 00:10:04.822 Zone Append Size Limit: 0 00:10:04.822 00:10:04.822 00:10:04.822 Active Namespaces 00:10:04.822 ================= 00:10:04.822 Namespace ID:1 00:10:04.822 Error Recovery Timeout: Unlimited 00:10:04.822 Command Set Identifier: NVM (00h) 00:10:04.822 Deallocate: Supported 00:10:04.822 Deallocated/Unwritten Error: Supported 00:10:04.822 Deallocated Read Value: All 0x00 00:10:04.822 Deallocate in Write Zeroes: Not Supported 00:10:04.822 Deallocated Guard Field: 0xFFFF 00:10:04.822 Flush: Supported 00:10:04.822 Reservation: Not Supported 00:10:04.822 Namespace Sharing Capabilities: Private 00:10:04.822 Size (in LBAs): 1310720 (5GiB) 00:10:04.822 Capacity (in LBAs): 1310720 (5GiB) 00:10:04.822 Utilization (in LBAs): 1310720 (5GiB) 00:10:04.822 Thin Provisioning: Not Supported 00:10:04.822 Per-NS Atomic Units: No 00:10:04.822 Maximum Single Source Range Length: 128 00:10:04.822 Maximum Copy Length: 128 00:10:04.822 Maximum Source Range Count: 128 00:10:04.822 NGUID/EUI64 Never Reused: No 00:10:04.822 Namespace Write Protected: No 00:10:04.822 Number of LBA Formats: 8 00:10:04.822 Current LBA Format: LBA Format #04 00:10:04.822 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.822 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:10:04.822 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.822 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.822 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.822 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.822 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.822 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.822 00:10:04.822 NVM Specific Namespace Data 00:10:04.822 =========================== 00:10:04.822 Logical Block Storage Tag Mask: 0 00:10:04.822 Protection Information Capabilities: 00:10:04.822 16b Guard Protection Information Storage Tag Support: No 00:10:04.822 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:04.822 Storage Tag Check Read Support: No 00:10:04.822 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.822 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.822 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.822 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.822 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.822 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.822 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.822 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.822 11:13:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:04.822 11:13:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:05.152 ===================================================== 00:10:05.152 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:05.152 ===================================================== 00:10:05.152 Controller Capabilities/Features 00:10:05.152 ================================ 00:10:05.152 Vendor ID: 1b36 00:10:05.152 Subsystem Vendor ID: 1af4 00:10:05.152 Serial Number: 12342 00:10:05.152 Model Number: QEMU NVMe Ctrl 00:10:05.152 Firmware Version: 8.0.0 00:10:05.152 Recommended Arb Burst: 6 00:10:05.152 IEEE OUI Identifier: 00 54 52 00:10:05.152 Multi-path I/O 00:10:05.152 May have multiple subsystem ports: No 00:10:05.152 May have multiple controllers: No 00:10:05.152 Associated with SR-IOV VF: No 00:10:05.152 Max Data Transfer Size: 524288 00:10:05.152 Max Number of Namespaces: 256 00:10:05.152 Max Number of I/O Queues: 64 00:10:05.152 NVMe Specification Version (VS): 1.4 00:10:05.152 NVMe Specification Version (Identify): 1.4 00:10:05.152 Maximum Queue Entries: 2048 00:10:05.152 Contiguous Queues Required: Yes 00:10:05.152 Arbitration Mechanisms Supported 00:10:05.152 Weighted Round Robin: Not Supported 00:10:05.152 Vendor Specific: Not Supported 00:10:05.152 Reset Timeout: 7500 ms 00:10:05.152 Doorbell Stride: 4 bytes 00:10:05.153 NVM Subsystem Reset: Not Supported 00:10:05.153 Command Sets Supported 00:10:05.153 NVM Command Set: Supported 00:10:05.153 Boot Partition: Not Supported 00:10:05.153 Memory Page Size Minimum: 4096 bytes 00:10:05.153 Memory Page Size Maximum: 65536 bytes 00:10:05.153 Persistent Memory Region: Not Supported 00:10:05.153 Optional Asynchronous Events Supported 00:10:05.153 Namespace Attribute Notices: Supported 00:10:05.153 Firmware 
Activation Notices: Not Supported 00:10:05.153 ANA Change Notices: Not Supported 00:10:05.153 PLE Aggregate Log Change Notices: Not Supported 00:10:05.153 LBA Status Info Alert Notices: Not Supported 00:10:05.153 EGE Aggregate Log Change Notices: Not Supported 00:10:05.153 Normal NVM Subsystem Shutdown event: Not Supported 00:10:05.153 Zone Descriptor Change Notices: Not Supported 00:10:05.153 Discovery Log Change Notices: Not Supported 00:10:05.153 Controller Attributes 00:10:05.153 128-bit Host Identifier: Not Supported 00:10:05.153 Non-Operational Permissive Mode: Not Supported 00:10:05.153 NVM Sets: Not Supported 00:10:05.153 Read Recovery Levels: Not Supported 00:10:05.153 Endurance Groups: Not Supported 00:10:05.153 Predictable Latency Mode: Not Supported 00:10:05.153 Traffic Based Keep ALive: Not Supported 00:10:05.153 Namespace Granularity: Not Supported 00:10:05.153 SQ Associations: Not Supported 00:10:05.153 UUID List: Not Supported 00:10:05.153 Multi-Domain Subsystem: Not Supported 00:10:05.153 Fixed Capacity Management: Not Supported 00:10:05.153 Variable Capacity Management: Not Supported 00:10:05.153 Delete Endurance Group: Not Supported 00:10:05.153 Delete NVM Set: Not Supported 00:10:05.153 Extended LBA Formats Supported: Supported 00:10:05.153 Flexible Data Placement Supported: Not Supported 00:10:05.153 00:10:05.153 Controller Memory Buffer Support 00:10:05.153 ================================ 00:10:05.153 Supported: No 00:10:05.153 00:10:05.153 Persistent Memory Region Support 00:10:05.153 ================================ 00:10:05.153 Supported: No 00:10:05.153 00:10:05.153 Admin Command Set Attributes 00:10:05.153 ============================ 00:10:05.153 Security Send/Receive: Not Supported 00:10:05.153 Format NVM: Supported 00:10:05.153 Firmware Activate/Download: Not Supported 00:10:05.153 Namespace Management: Supported 00:10:05.153 Device Self-Test: Not Supported 00:10:05.153 Directives: Supported 00:10:05.153 NVMe-MI: Not Supported 00:10:05.153 Virtualization Management: Not Supported 00:10:05.153 Doorbell Buffer Config: Supported 00:10:05.153 Get LBA Status Capability: Not Supported 00:10:05.153 Command & Feature Lockdown Capability: Not Supported 00:10:05.153 Abort Command Limit: 4 00:10:05.153 Async Event Request Limit: 4 00:10:05.153 Number of Firmware Slots: N/A 00:10:05.153 Firmware Slot 1 Read-Only: N/A 00:10:05.153 Firmware Activation Without Reset: N/A 00:10:05.153 Multiple Update Detection Support: N/A 00:10:05.153 Firmware Update Granularity: No Information Provided 00:10:05.153 Per-Namespace SMART Log: Yes 00:10:05.153 Asymmetric Namespace Access Log Page: Not Supported 00:10:05.153 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:05.153 Command Effects Log Page: Supported 00:10:05.153 Get Log Page Extended Data: Supported 00:10:05.153 Telemetry Log Pages: Not Supported 00:10:05.153 Persistent Event Log Pages: Not Supported 00:10:05.153 Supported Log Pages Log Page: May Support 00:10:05.153 Commands Supported & Effects Log Page: Not Supported 00:10:05.153 Feature Identifiers & Effects Log Page:May Support 00:10:05.153 NVMe-MI Commands & Effects Log Page: May Support 00:10:05.153 Data Area 4 for Telemetry Log: Not Supported 00:10:05.153 Error Log Page Entries Supported: 1 00:10:05.153 Keep Alive: Not Supported 00:10:05.153 00:10:05.153 NVM Command Set Attributes 00:10:05.153 ========================== 00:10:05.153 Submission Queue Entry Size 00:10:05.153 Max: 64 00:10:05.153 Min: 64 00:10:05.153 Completion Queue Entry Size 00:10:05.153 Max: 16 
00:10:05.153 Min: 16 00:10:05.153 Number of Namespaces: 256 00:10:05.153 Compare Command: Supported 00:10:05.153 Write Uncorrectable Command: Not Supported 00:10:05.153 Dataset Management Command: Supported 00:10:05.153 Write Zeroes Command: Supported 00:10:05.153 Set Features Save Field: Supported 00:10:05.153 Reservations: Not Supported 00:10:05.153 Timestamp: Supported 00:10:05.153 Copy: Supported 00:10:05.153 Volatile Write Cache: Present 00:10:05.153 Atomic Write Unit (Normal): 1 00:10:05.153 Atomic Write Unit (PFail): 1 00:10:05.153 Atomic Compare & Write Unit: 1 00:10:05.153 Fused Compare & Write: Not Supported 00:10:05.153 Scatter-Gather List 00:10:05.153 SGL Command Set: Supported 00:10:05.153 SGL Keyed: Not Supported 00:10:05.153 SGL Bit Bucket Descriptor: Not Supported 00:10:05.153 SGL Metadata Pointer: Not Supported 00:10:05.153 Oversized SGL: Not Supported 00:10:05.153 SGL Metadata Address: Not Supported 00:10:05.153 SGL Offset: Not Supported 00:10:05.153 Transport SGL Data Block: Not Supported 00:10:05.153 Replay Protected Memory Block: Not Supported 00:10:05.153 00:10:05.153 Firmware Slot Information 00:10:05.153 ========================= 00:10:05.153 Active slot: 1 00:10:05.153 Slot 1 Firmware Revision: 1.0 00:10:05.153 00:10:05.153 00:10:05.153 Commands Supported and Effects 00:10:05.153 ============================== 00:10:05.153 Admin Commands 00:10:05.153 -------------- 00:10:05.153 Delete I/O Submission Queue (00h): Supported 00:10:05.153 Create I/O Submission Queue (01h): Supported 00:10:05.153 Get Log Page (02h): Supported 00:10:05.153 Delete I/O Completion Queue (04h): Supported 00:10:05.153 Create I/O Completion Queue (05h): Supported 00:10:05.153 Identify (06h): Supported 00:10:05.153 Abort (08h): Supported 00:10:05.153 Set Features (09h): Supported 00:10:05.153 Get Features (0Ah): Supported 00:10:05.153 Asynchronous Event Request (0Ch): Supported 00:10:05.153 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:05.153 Directive Send (19h): Supported 00:10:05.153 Directive Receive (1Ah): Supported 00:10:05.153 Virtualization Management (1Ch): Supported 00:10:05.153 Doorbell Buffer Config (7Ch): Supported 00:10:05.153 Format NVM (80h): Supported LBA-Change 00:10:05.153 I/O Commands 00:10:05.153 ------------ 00:10:05.153 Flush (00h): Supported LBA-Change 00:10:05.153 Write (01h): Supported LBA-Change 00:10:05.153 Read (02h): Supported 00:10:05.153 Compare (05h): Supported 00:10:05.153 Write Zeroes (08h): Supported LBA-Change 00:10:05.153 Dataset Management (09h): Supported LBA-Change 00:10:05.153 Unknown (0Ch): Supported 00:10:05.153 Unknown (12h): Supported 00:10:05.153 Copy (19h): Supported LBA-Change 00:10:05.153 Unknown (1Dh): Supported LBA-Change 00:10:05.153 00:10:05.153 Error Log 00:10:05.153 ========= 00:10:05.153 00:10:05.153 Arbitration 00:10:05.153 =========== 00:10:05.153 Arbitration Burst: no limit 00:10:05.153 00:10:05.153 Power Management 00:10:05.153 ================ 00:10:05.153 Number of Power States: 1 00:10:05.153 Current Power State: Power State #0 00:10:05.153 Power State #0: 00:10:05.153 Max Power: 25.00 W 00:10:05.153 Non-Operational State: Operational 00:10:05.153 Entry Latency: 16 microseconds 00:10:05.153 Exit Latency: 4 microseconds 00:10:05.153 Relative Read Throughput: 0 00:10:05.153 Relative Read Latency: 0 00:10:05.153 Relative Write Throughput: 0 00:10:05.153 Relative Write Latency: 0 00:10:05.153 Idle Power: Not Reported 00:10:05.153 Active Power: Not Reported 00:10:05.153 Non-Operational Permissive Mode: Not Supported 
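(For orientation: every identify dump in this stretch of the log comes from the per-BDF loop traced at 11:13:41-11:13:42 above, i.e. lines 15-16 of nvme/nvme.sh. A minimal sketch of that loop, with the four QEMU controller addresses exercised in this run hard-coded for illustration; the real script populates the bdfs array dynamically rather than hard-coding it:

    # Sketch reconstructed from the xtrace output above; illustrative only.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)  # the real nvme.sh discovers these at runtime
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
    done

Each iteration prints one controller report like the ones surrounding this note.)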
00:10:05.153 00:10:05.153 Health Information 00:10:05.153 ================== 00:10:05.153 Critical Warnings: 00:10:05.153 Available Spare Space: OK 00:10:05.153 Temperature: OK 00:10:05.153 Device Reliability: OK 00:10:05.153 Read Only: No 00:10:05.153 Volatile Memory Backup: OK 00:10:05.153 Current Temperature: 323 Kelvin (50 Celsius) 00:10:05.153 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:05.153 Available Spare: 0% 00:10:05.153 Available Spare Threshold: 0% 00:10:05.153 Life Percentage Used: 0% 00:10:05.153 Data Units Read: 2325 00:10:05.153 Data Units Written: 2113 00:10:05.153 Host Read Commands: 106428 00:10:05.153 Host Write Commands: 104697 00:10:05.153 Controller Busy Time: 0 minutes 00:10:05.153 Power Cycles: 0 00:10:05.153 Power On Hours: 0 hours 00:10:05.153 Unsafe Shutdowns: 0 00:10:05.153 Unrecoverable Media Errors: 0 00:10:05.153 Lifetime Error Log Entries: 0 00:10:05.153 Warning Temperature Time: 0 minutes 00:10:05.153 Critical Temperature Time: 0 minutes 00:10:05.153 00:10:05.153 Number of Queues 00:10:05.153 ================ 00:10:05.153 Number of I/O Submission Queues: 64 00:10:05.154 Number of I/O Completion Queues: 64 00:10:05.154 00:10:05.154 ZNS Specific Controller Data 00:10:05.154 ============================ 00:10:05.154 Zone Append Size Limit: 0 00:10:05.154 00:10:05.154 00:10:05.154 Active Namespaces 00:10:05.154 ================= 00:10:05.154 Namespace ID:1 00:10:05.154 Error Recovery Timeout: Unlimited 00:10:05.154 Command Set Identifier: NVM (00h) 00:10:05.154 Deallocate: Supported 00:10:05.154 Deallocated/Unwritten Error: Supported 00:10:05.154 Deallocated Read Value: All 0x00 00:10:05.154 Deallocate in Write Zeroes: Not Supported 00:10:05.154 Deallocated Guard Field: 0xFFFF 00:10:05.154 Flush: Supported 00:10:05.154 Reservation: Not Supported 00:10:05.154 Namespace Sharing Capabilities: Private 00:10:05.154 Size (in LBAs): 1048576 (4GiB) 00:10:05.154 Capacity (in LBAs): 1048576 (4GiB) 00:10:05.154 Utilization (in LBAs): 1048576 (4GiB) 00:10:05.154 Thin Provisioning: Not Supported 00:10:05.154 Per-NS Atomic Units: No 00:10:05.154 Maximum Single Source Range Length: 128 00:10:05.154 Maximum Copy Length: 128 00:10:05.154 Maximum Source Range Count: 128 00:10:05.154 NGUID/EUI64 Never Reused: No 00:10:05.154 Namespace Write Protected: No 00:10:05.154 Number of LBA Formats: 8 00:10:05.154 Current LBA Format: LBA Format #04 00:10:05.154 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:05.154 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:05.154 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:05.154 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:05.154 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:05.154 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:05.154 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:05.154 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:05.154 00:10:05.154 NVM Specific Namespace Data 00:10:05.154 =========================== 00:10:05.154 Logical Block Storage Tag Mask: 0 00:10:05.154 Protection Information Capabilities: 00:10:05.154 16b Guard Protection Information Storage Tag Support: No 00:10:05.154 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:05.154 Storage Tag Check Read Support: No 00:10:05.154 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Namespace ID:2 00:10:05.154 Error Recovery Timeout: Unlimited 00:10:05.154 Command Set Identifier: NVM (00h) 00:10:05.154 Deallocate: Supported 00:10:05.154 Deallocated/Unwritten Error: Supported 00:10:05.154 Deallocated Read Value: All 0x00 00:10:05.154 Deallocate in Write Zeroes: Not Supported 00:10:05.154 Deallocated Guard Field: 0xFFFF 00:10:05.154 Flush: Supported 00:10:05.154 Reservation: Not Supported 00:10:05.154 Namespace Sharing Capabilities: Private 00:10:05.154 Size (in LBAs): 1048576 (4GiB) 00:10:05.154 Capacity (in LBAs): 1048576 (4GiB) 00:10:05.154 Utilization (in LBAs): 1048576 (4GiB) 00:10:05.154 Thin Provisioning: Not Supported 00:10:05.154 Per-NS Atomic Units: No 00:10:05.154 Maximum Single Source Range Length: 128 00:10:05.154 Maximum Copy Length: 128 00:10:05.154 Maximum Source Range Count: 128 00:10:05.154 NGUID/EUI64 Never Reused: No 00:10:05.154 Namespace Write Protected: No 00:10:05.154 Number of LBA Formats: 8 00:10:05.154 Current LBA Format: LBA Format #04 00:10:05.154 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:05.154 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:05.154 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:05.154 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:05.154 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:05.154 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:05.154 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:05.154 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:05.154 00:10:05.154 NVM Specific Namespace Data 00:10:05.154 =========================== 00:10:05.154 Logical Block Storage Tag Mask: 0 00:10:05.154 Protection Information Capabilities: 00:10:05.154 16b Guard Protection Information Storage Tag Support: No 00:10:05.154 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:05.154 Storage Tag Check Read Support: No 00:10:05.154 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Namespace ID:3 00:10:05.154 Error Recovery Timeout: Unlimited 00:10:05.154 Command Set Identifier: NVM (00h) 00:10:05.154 Deallocate: Supported 00:10:05.154 Deallocated/Unwritten Error: Supported 00:10:05.154 Deallocated Read 
Value: All 0x00 00:10:05.154 Deallocate in Write Zeroes: Not Supported 00:10:05.154 Deallocated Guard Field: 0xFFFF 00:10:05.154 Flush: Supported 00:10:05.154 Reservation: Not Supported 00:10:05.154 Namespace Sharing Capabilities: Private 00:10:05.154 Size (in LBAs): 1048576 (4GiB) 00:10:05.154 Capacity (in LBAs): 1048576 (4GiB) 00:10:05.154 Utilization (in LBAs): 1048576 (4GiB) 00:10:05.154 Thin Provisioning: Not Supported 00:10:05.154 Per-NS Atomic Units: No 00:10:05.154 Maximum Single Source Range Length: 128 00:10:05.154 Maximum Copy Length: 128 00:10:05.154 Maximum Source Range Count: 128 00:10:05.154 NGUID/EUI64 Never Reused: No 00:10:05.154 Namespace Write Protected: No 00:10:05.154 Number of LBA Formats: 8 00:10:05.154 Current LBA Format: LBA Format #04 00:10:05.154 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:05.154 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:05.154 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:05.154 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:05.154 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:05.154 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:05.154 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:05.154 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:05.154 00:10:05.154 NVM Specific Namespace Data 00:10:05.154 =========================== 00:10:05.154 Logical Block Storage Tag Mask: 0 00:10:05.154 Protection Information Capabilities: 00:10:05.154 16b Guard Protection Information Storage Tag Support: No 00:10:05.154 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:05.154 Storage Tag Check Read Support: No 00:10:05.154 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.154 11:13:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:05.154 11:13:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:05.414 ===================================================== 00:10:05.414 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:05.414 ===================================================== 00:10:05.414 Controller Capabilities/Features 00:10:05.414 ================================ 00:10:05.414 Vendor ID: 1b36 00:10:05.414 Subsystem Vendor ID: 1af4 00:10:05.414 Serial Number: 12343 00:10:05.414 Model Number: QEMU NVMe Ctrl 00:10:05.414 Firmware Version: 8.0.0 00:10:05.414 Recommended Arb Burst: 6 00:10:05.414 IEEE OUI Identifier: 00 54 52 00:10:05.414 Multi-path I/O 00:10:05.414 May have multiple subsystem ports: No 00:10:05.414 May have multiple controllers: Yes 00:10:05.414 Associated with SR-IOV VF: No 00:10:05.414 Max Data Transfer Size: 524288 00:10:05.414 Max Number of Namespaces: 
256 00:10:05.414 Max Number of I/O Queues: 64 00:10:05.414 NVMe Specification Version (VS): 1.4 00:10:05.414 NVMe Specification Version (Identify): 1.4 00:10:05.414 Maximum Queue Entries: 2048 00:10:05.414 Contiguous Queues Required: Yes 00:10:05.414 Arbitration Mechanisms Supported 00:10:05.414 Weighted Round Robin: Not Supported 00:10:05.414 Vendor Specific: Not Supported 00:10:05.414 Reset Timeout: 7500 ms 00:10:05.414 Doorbell Stride: 4 bytes 00:10:05.414 NVM Subsystem Reset: Not Supported 00:10:05.414 Command Sets Supported 00:10:05.414 NVM Command Set: Supported 00:10:05.414 Boot Partition: Not Supported 00:10:05.414 Memory Page Size Minimum: 4096 bytes 00:10:05.414 Memory Page Size Maximum: 65536 bytes 00:10:05.414 Persistent Memory Region: Not Supported 00:10:05.414 Optional Asynchronous Events Supported 00:10:05.414 Namespace Attribute Notices: Supported 00:10:05.414 Firmware Activation Notices: Not Supported 00:10:05.414 ANA Change Notices: Not Supported 00:10:05.414 PLE Aggregate Log Change Notices: Not Supported 00:10:05.414 LBA Status Info Alert Notices: Not Supported 00:10:05.414 EGE Aggregate Log Change Notices: Not Supported 00:10:05.414 Normal NVM Subsystem Shutdown event: Not Supported 00:10:05.414 Zone Descriptor Change Notices: Not Supported 00:10:05.414 Discovery Log Change Notices: Not Supported 00:10:05.414 Controller Attributes 00:10:05.414 128-bit Host Identifier: Not Supported 00:10:05.414 Non-Operational Permissive Mode: Not Supported 00:10:05.414 NVM Sets: Not Supported 00:10:05.414 Read Recovery Levels: Not Supported 00:10:05.414 Endurance Groups: Supported 00:10:05.414 Predictable Latency Mode: Not Supported 00:10:05.414 Traffic Based Keep Alive: Not Supported 00:10:05.414 Namespace Granularity: Not Supported 00:10:05.414 SQ Associations: Not Supported 00:10:05.414 UUID List: Not Supported 00:10:05.414 Multi-Domain Subsystem: Not Supported 00:10:05.414 Fixed Capacity Management: Not Supported 00:10:05.414 Variable Capacity Management: Not Supported 00:10:05.414 Delete Endurance Group: Not Supported 00:10:05.414 Delete NVM Set: Not Supported 00:10:05.414 Extended LBA Formats Supported: Supported 00:10:05.414 Flexible Data Placement Supported: Supported 00:10:05.414 00:10:05.414 Controller Memory Buffer Support 00:10:05.414 ================================ 00:10:05.414 Supported: No 00:10:05.414 00:10:05.414 Persistent Memory Region Support 00:10:05.414 ================================ 00:10:05.414 Supported: No 00:10:05.414 00:10:05.414 Admin Command Set Attributes 00:10:05.414 ============================ 00:10:05.414 Security Send/Receive: Not Supported 00:10:05.414 Format NVM: Supported 00:10:05.414 Firmware Activate/Download: Not Supported 00:10:05.414 Namespace Management: Supported 00:10:05.414 Device Self-Test: Not Supported 00:10:05.414 Directives: Supported 00:10:05.414 NVMe-MI: Not Supported 00:10:05.414 Virtualization Management: Not Supported 00:10:05.414 Doorbell Buffer Config: Supported 00:10:05.414 Get LBA Status Capability: Not Supported 00:10:05.414 Command & Feature Lockdown Capability: Not Supported 00:10:05.414 Abort Command Limit: 4 00:10:05.414 Async Event Request Limit: 4 00:10:05.414 Number of Firmware Slots: N/A 00:10:05.414 Firmware Slot 1 Read-Only: N/A 00:10:05.414 Firmware Activation Without Reset: N/A 00:10:05.414 Multiple Update Detection Support: N/A 00:10:05.414 Firmware Update Granularity: No Information Provided 00:10:05.414 Per-Namespace SMART Log: Yes 00:10:05.414 Asymmetric Namespace Access Log Page: Not Supported 
00:10:05.414 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:05.414 Command Effects Log Page: Supported 00:10:05.414 Get Log Page Extended Data: Supported 00:10:05.414 Telemetry Log Pages: Not Supported 00:10:05.414 Persistent Event Log Pages: Not Supported 00:10:05.414 Supported Log Pages Log Page: May Support 00:10:05.414 Commands Supported & Effects Log Page: Not Supported 00:10:05.414 Feature Identifiers & Effects Log Page: May Support 00:10:05.414 NVMe-MI Commands & Effects Log Page: May Support 00:10:05.414 Data Area 4 for Telemetry Log: Not Supported 00:10:05.414 Error Log Page Entries Supported: 1 00:10:05.414 Keep Alive: Not Supported 00:10:05.414 00:10:05.414 NVM Command Set Attributes 00:10:05.414 ========================== 00:10:05.414 Submission Queue Entry Size 00:10:05.414 Max: 64 00:10:05.414 Min: 64 00:10:05.414 Completion Queue Entry Size 00:10:05.414 Max: 16 00:10:05.414 Min: 16 00:10:05.414 Number of Namespaces: 256 00:10:05.414 Compare Command: Supported 00:10:05.414 Write Uncorrectable Command: Not Supported 00:10:05.414 Dataset Management Command: Supported 00:10:05.414 Write Zeroes Command: Supported 00:10:05.415 Set Features Save Field: Supported 00:10:05.415 Reservations: Not Supported 00:10:05.415 Timestamp: Supported 00:10:05.415 Copy: Supported 00:10:05.415 Volatile Write Cache: Present 00:10:05.415 Atomic Write Unit (Normal): 1 00:10:05.415 Atomic Write Unit (PFail): 1 00:10:05.415 Atomic Compare & Write Unit: 1 00:10:05.415 Fused Compare & Write: Not Supported 00:10:05.415 Scatter-Gather List 00:10:05.415 SGL Command Set: Supported 00:10:05.415 SGL Keyed: Not Supported 00:10:05.415 SGL Bit Bucket Descriptor: Not Supported 00:10:05.415 SGL Metadata Pointer: Not Supported 00:10:05.415 Oversized SGL: Not Supported 00:10:05.415 SGL Metadata Address: Not Supported 00:10:05.415 SGL Offset: Not Supported 00:10:05.415 Transport SGL Data Block: Not Supported 00:10:05.415 Replay Protected Memory Block: Not Supported 00:10:05.415 00:10:05.415 Firmware Slot Information 00:10:05.415 ========================= 00:10:05.415 Active slot: 1 00:10:05.415 Slot 1 Firmware Revision: 1.0 00:10:05.415 00:10:05.415 00:10:05.415 Commands Supported and Effects 00:10:05.415 ============================== 00:10:05.415 Admin Commands 00:10:05.415 -------------- 00:10:05.415 Delete I/O Submission Queue (00h): Supported 00:10:05.415 Create I/O Submission Queue (01h): Supported 00:10:05.415 Get Log Page (02h): Supported 00:10:05.415 Delete I/O Completion Queue (04h): Supported 00:10:05.415 Create I/O Completion Queue (05h): Supported 00:10:05.415 Identify (06h): Supported 00:10:05.415 Abort (08h): Supported 00:10:05.415 Set Features (09h): Supported 00:10:05.415 Get Features (0Ah): Supported 00:10:05.415 Asynchronous Event Request (0Ch): Supported 00:10:05.415 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:05.415 Directive Send (19h): Supported 00:10:05.415 Directive Receive (1Ah): Supported 00:10:05.415 Virtualization Management (1Ch): Supported 00:10:05.415 Doorbell Buffer Config (7Ch): Supported 00:10:05.415 Format NVM (80h): Supported LBA-Change 00:10:05.415 I/O Commands 00:10:05.415 ------------ 00:10:05.415 Flush (00h): Supported LBA-Change 00:10:05.415 Write (01h): Supported LBA-Change 00:10:05.415 Read (02h): Supported 00:10:05.415 Compare (05h): Supported 00:10:05.415 Write Zeroes (08h): Supported LBA-Change 00:10:05.415 Dataset Management (09h): Supported LBA-Change 00:10:05.415 Unknown (0Ch): Supported 00:10:05.415 Unknown (12h): Supported 00:10:05.415 Copy 
(19h): Supported LBA-Change 00:10:05.415 Unknown (1Dh): Supported LBA-Change 00:10:05.415 00:10:05.415 Error Log 00:10:05.415 ========= 00:10:05.415 00:10:05.415 Arbitration 00:10:05.415 =========== 00:10:05.415 Arbitration Burst: no limit 00:10:05.415 00:10:05.415 Power Management 00:10:05.415 ================ 00:10:05.415 Number of Power States: 1 00:10:05.415 Current Power State: Power State #0 00:10:05.415 Power State #0: 00:10:05.415 Max Power: 25.00 W 00:10:05.415 Non-Operational State: Operational 00:10:05.415 Entry Latency: 16 microseconds 00:10:05.415 Exit Latency: 4 microseconds 00:10:05.415 Relative Read Throughput: 0 00:10:05.415 Relative Read Latency: 0 00:10:05.415 Relative Write Throughput: 0 00:10:05.415 Relative Write Latency: 0 00:10:05.415 Idle Power: Not Reported 00:10:05.415 Active Power: Not Reported 00:10:05.415 Non-Operational Permissive Mode: Not Supported 00:10:05.415 00:10:05.415 Health Information 00:10:05.415 ================== 00:10:05.415 Critical Warnings: 00:10:05.415 Available Spare Space: OK 00:10:05.415 Temperature: OK 00:10:05.415 Device Reliability: OK 00:10:05.415 Read Only: No 00:10:05.415 Volatile Memory Backup: OK 00:10:05.415 Current Temperature: 323 Kelvin (50 Celsius) 00:10:05.415 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:05.415 Available Spare: 0% 00:10:05.415 Available Spare Threshold: 0% 00:10:05.415 Life Percentage Used: 0% 00:10:05.415 Data Units Read: 838 00:10:05.415 Data Units Written: 767 00:10:05.415 Host Read Commands: 35961 00:10:05.415 Host Write Commands: 35384 00:10:05.415 Controller Busy Time: 0 minutes 00:10:05.415 Power Cycles: 0 00:10:05.415 Power On Hours: 0 hours 00:10:05.415 Unsafe Shutdowns: 0 00:10:05.415 Unrecoverable Media Errors: 0 00:10:05.415 Lifetime Error Log Entries: 0 00:10:05.415 Warning Temperature Time: 0 minutes 00:10:05.415 Critical Temperature Time: 0 minutes 00:10:05.415 00:10:05.415 Number of Queues 00:10:05.415 ================ 00:10:05.415 Number of I/O Submission Queues: 64 00:10:05.415 Number of I/O Completion Queues: 64 00:10:05.415 00:10:05.415 ZNS Specific Controller Data 00:10:05.415 ============================ 00:10:05.415 Zone Append Size Limit: 0 00:10:05.415 00:10:05.415 00:10:05.415 Active Namespaces 00:10:05.415 ================= 00:10:05.415 Namespace ID:1 00:10:05.415 Error Recovery Timeout: Unlimited 00:10:05.415 Command Set Identifier: NVM (00h) 00:10:05.415 Deallocate: Supported 00:10:05.415 Deallocated/Unwritten Error: Supported 00:10:05.415 Deallocated Read Value: All 0x00 00:10:05.415 Deallocate in Write Zeroes: Not Supported 00:10:05.415 Deallocated Guard Field: 0xFFFF 00:10:05.415 Flush: Supported 00:10:05.415 Reservation: Not Supported 00:10:05.415 Namespace Sharing Capabilities: Multiple Controllers 00:10:05.415 Size (in LBAs): 262144 (1GiB) 00:10:05.415 Capacity (in LBAs): 262144 (1GiB) 00:10:05.415 Utilization (in LBAs): 262144 (1GiB) 00:10:05.415 Thin Provisioning: Not Supported 00:10:05.415 Per-NS Atomic Units: No 00:10:05.415 Maximum Single Source Range Length: 128 00:10:05.415 Maximum Copy Length: 128 00:10:05.415 Maximum Source Range Count: 128 00:10:05.415 NGUID/EUI64 Never Reused: No 00:10:05.415 Namespace Write Protected: No 00:10:05.415 Endurance group ID: 1 00:10:05.415 Number of LBA Formats: 8 00:10:05.415 Current LBA Format: LBA Format #04 00:10:05.415 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:05.415 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:05.415 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:05.415 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:10:05.415 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:05.415 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:05.415 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:05.415 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:05.415 00:10:05.415 Get Feature FDP: 00:10:05.415 ================ 00:10:05.415 Enabled: Yes 00:10:05.415 FDP configuration index: 0 00:10:05.415 00:10:05.415 FDP configurations log page 00:10:05.415 =========================== 00:10:05.415 Number of FDP configurations: 1 00:10:05.415 Version: 0 00:10:05.415 Size: 112 00:10:05.415 FDP Configuration Descriptor: 0 00:10:05.415 Descriptor Size: 96 00:10:05.415 Reclaim Group Identifier format: 2 00:10:05.415 FDP Volatile Write Cache: Not Present 00:10:05.415 FDP Configuration: Valid 00:10:05.415 Vendor Specific Size: 0 00:10:05.415 Number of Reclaim Groups: 2 00:10:05.415 Number of Reclaim Unit Handles: 8 00:10:05.415 Max Placement Identifiers: 128 00:10:05.415 Number of Namespaces Supported: 256 00:10:05.415 Reclaim Unit Nominal Size: 6000000 bytes 00:10:05.415 Estimated Reclaim Unit Time Limit: Not Reported 00:10:05.415 RUH Desc #000: RUH Type: Initially Isolated 00:10:05.415 RUH Desc #001: RUH Type: Initially Isolated 00:10:05.415 RUH Desc #002: RUH Type: Initially Isolated 00:10:05.415 RUH Desc #003: RUH Type: Initially Isolated 00:10:05.415 RUH Desc #004: RUH Type: Initially Isolated 00:10:05.415 RUH Desc #005: RUH Type: Initially Isolated 00:10:05.415 RUH Desc #006: RUH Type: Initially Isolated 00:10:05.415 RUH Desc #007: RUH Type: Initially Isolated 00:10:05.415 00:10:05.415 FDP reclaim unit handle usage log page 00:10:05.415 ====================================== 00:10:05.415 Number of Reclaim Unit Handles: 8 00:10:05.415 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:05.415 RUH Usage Desc #001: RUH Attributes: Unused 00:10:05.415 RUH Usage Desc #002: RUH Attributes: Unused 00:10:05.415 RUH Usage Desc #003: RUH Attributes: Unused 00:10:05.415 RUH Usage Desc #004: RUH Attributes: Unused 00:10:05.415 RUH Usage Desc #005: RUH Attributes: Unused 00:10:05.415 RUH Usage Desc #006: RUH Attributes: Unused 00:10:05.415 RUH Usage Desc #007: RUH Attributes: Unused 00:10:05.415 00:10:05.415 FDP statistics log page 00:10:05.415 ======================= 00:10:05.415 Host bytes with metadata written: 495296512 00:10:05.415 Media bytes with metadata written: 495349760 00:10:05.416 Media bytes erased: 0 00:10:05.416 00:10:05.416 FDP events log page 00:10:05.416 =================== 00:10:05.416 Number of FDP events: 0 00:10:05.416 00:10:05.416 NVM Specific Namespace Data 00:10:05.416 =========================== 00:10:05.416 Logical Block Storage Tag Mask: 0 00:10:05.416 Protection Information Capabilities: 00:10:05.416 16b Guard Protection Information Storage Tag Support: No 00:10:05.416 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:05.416 Storage Tag Check Read Support: No 00:10:05.416 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.416 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.416 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.416 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.416 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.416 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.416 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.416 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.416 ************************************ 00:10:05.416 END TEST nvme_identify 00:10:05.416 ************************************ 00:10:05.416 00:10:05.416 real 0m1.867s 00:10:05.416 user 0m0.707s 00:10:05.416 sys 0m0.945s 00:10:05.416 11:13:42 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.416 11:13:42 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:05.416 11:13:42 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:05.416 11:13:42 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:05.416 11:13:42 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.416 11:13:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.416 ************************************ 00:10:05.416 START TEST nvme_perf 00:10:05.416 ************************************ 00:10:05.416 11:13:42 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:10:05.416 11:13:42 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:06.794 Initializing NVMe Controllers 00:10:06.794 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:06.794 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:06.794 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:06.794 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:06.794 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:06.794 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:06.794 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:06.794 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:06.794 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:06.794 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:06.794 Initialization complete. Launching workers. 
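[Editor's note on the perf run above: the workload was launched with spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N, i.e. queue depth 128, a pure read workload, 12288-byte (12 KiB) I/Os, and a 1-second run, with latency tracking enabled, which is why the detailed latency histograms are printed further down. Each row of the summary table that follows reports, per namespace, IOPS, throughput in MiB/s, and average/min/max latency in microseconds. As a minimal sketch (not part of SPDK; the regex and the parse_perf_summary helper are illustrative assumptions), rows like these can be pulled out of a captured console log with a few lines of Python:

import re
import sys

# Matches spdk_nvme_perf summary rows of the form:
#   PCIE (0000:00:10.0) NSID 1 from core 0: 13438.36 157.48 9546.99 8252.13 56915.44
# The five columns are IOPS, MiB/s, then average/min/max latency in microseconds.
ROW = re.compile(
    r"PCIE \((?P<bdf>[0-9a-fA-F:.]+)\) NSID (?P<nsid>\d+) from core \d+:\s+"
    r"(?P<iops>[\d.]+)\s+(?P<mib_s>[\d.]+)\s+"
    r"(?P<avg_us>[\d.]+)\s+(?P<min_us>[\d.]+)\s+(?P<max_us>[\d.]+)"
)

def parse_perf_summary(log_text):
    """Yield one dict per device/namespace summary row found in the log text."""
    for m in ROW.finditer(log_text):
        row = m.groupdict()
        for key in ("iops", "mib_s", "avg_us", "min_us", "max_us"):
            row[key] = float(row[key])
        yield row

if __name__ == "__main__":
    for r in parse_perf_summary(sys.stdin.read()):
        print("%s nsid %s: %.0f IOPS, avg %.1f us (min %.1f, max %.1f)"
              % (r["bdf"], r["nsid"], r["iops"], r["avg_us"], r["min_us"], r["max_us"]))

Feeding this log to the script on stdin would, for instance, report the 0000:00:10.0 namespace at roughly 13438 IOPS with a 9547 us average latency, matching the table below.]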
00:10:06.794 ======================================================== 00:10:06.794 Latency(us) 00:10:06.794 Device Information : IOPS MiB/s Average min max 00:10:06.794 PCIE (0000:00:10.0) NSID 1 from core 0: 13438.36 157.48 9546.99 8252.13 56915.44 00:10:06.794 PCIE (0000:00:11.0) NSID 1 from core 0: 13438.36 157.48 9531.15 8342.09 54808.24 00:10:06.794 PCIE (0000:00:13.0) NSID 1 from core 0: 13438.36 157.48 9512.35 8314.09 53373.11 00:10:06.794 PCIE (0000:00:12.0) NSID 1 from core 0: 13438.36 157.48 9495.54 8293.41 51240.07 00:10:06.794 PCIE (0000:00:12.0) NSID 2 from core 0: 13438.36 157.48 9478.25 8356.38 49341.32 00:10:06.794 PCIE (0000:00:12.0) NSID 3 from core 0: 13502.35 158.23 9415.12 8334.69 41402.61 00:10:06.794 ======================================================== 00:10:06.794 Total : 80694.16 945.63 9496.50 8252.13 56915.44 00:10:06.794 00:10:06.794 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:06.794 ================================================================================= 00:10:06.794 1.00000% : 8422.297us 00:10:06.794 10.00000% : 8632.855us 00:10:06.794 25.00000% : 8843.412us 00:10:06.794 50.00000% : 9159.248us 00:10:06.794 75.00000% : 9422.445us 00:10:06.794 90.00000% : 9633.002us 00:10:06.794 95.00000% : 9843.560us 00:10:06.794 98.00000% : 10948.986us 00:10:06.794 99.00000% : 13475.676us 00:10:06.794 99.50000% : 48217.651us 00:10:06.794 99.90000% : 56850.506us 00:10:06.794 99.99000% : 57271.621us 00:10:06.794 99.99900% : 57271.621us 00:10:06.794 99.99990% : 57271.621us 00:10:06.794 99.99999% : 57271.621us 00:10:06.794 00:10:06.794 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:06.794 ================================================================================= 00:10:06.794 1.00000% : 8527.576us 00:10:06.794 10.00000% : 8738.133us 00:10:06.794 25.00000% : 8896.051us 00:10:06.794 50.00000% : 9159.248us 00:10:06.794 75.00000% : 9369.806us 00:10:06.794 90.00000% : 9633.002us 00:10:06.794 95.00000% : 9790.920us 00:10:06.794 98.00000% : 11001.626us 00:10:06.794 99.00000% : 13001.921us 00:10:06.794 99.50000% : 46533.192us 00:10:06.794 99.90000% : 54744.932us 00:10:06.794 99.99000% : 55166.047us 00:10:06.794 99.99900% : 55166.047us 00:10:06.794 99.99990% : 55166.047us 00:10:06.794 99.99999% : 55166.047us 00:10:06.794 00:10:06.794 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:06.794 ================================================================================= 00:10:06.794 1.00000% : 8527.576us 00:10:06.794 10.00000% : 8738.133us 00:10:06.794 25.00000% : 8896.051us 00:10:06.794 50.00000% : 9159.248us 00:10:06.794 75.00000% : 9369.806us 00:10:06.794 90.00000% : 9580.363us 00:10:06.794 95.00000% : 9790.920us 00:10:06.794 98.00000% : 10738.429us 00:10:06.794 99.00000% : 12686.085us 00:10:06.794 99.50000% : 45269.847us 00:10:06.794 99.90000% : 53060.472us 00:10:06.794 99.99000% : 53481.587us 00:10:06.794 99.99900% : 53481.587us 00:10:06.794 99.99990% : 53481.587us 00:10:06.794 99.99999% : 53481.587us 00:10:06.794 00:10:06.794 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:06.794 ================================================================================= 00:10:06.794 1.00000% : 8527.576us 00:10:06.794 10.00000% : 8738.133us 00:10:06.794 25.00000% : 8896.051us 00:10:06.794 50.00000% : 9159.248us 00:10:06.794 75.00000% : 9369.806us 00:10:06.794 90.00000% : 9633.002us 00:10:06.794 95.00000% : 9790.920us 00:10:06.794 98.00000% : 10738.429us 00:10:06.794 99.00000% : 
12633.446us 00:10:06.794 99.50000% : 43795.945us 00:10:06.794 99.90000% : 50954.898us 00:10:06.794 99.99000% : 51376.013us 00:10:06.794 99.99900% : 51376.013us 00:10:06.794 99.99990% : 51376.013us 00:10:06.794 99.99999% : 51376.013us 00:10:06.794 00:10:06.794 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:06.794 ================================================================================= 00:10:06.794 1.00000% : 8474.937us 00:10:06.794 10.00000% : 8738.133us 00:10:06.794 25.00000% : 8896.051us 00:10:06.794 50.00000% : 9159.248us 00:10:06.794 75.00000% : 9369.806us 00:10:06.794 90.00000% : 9580.363us 00:10:06.794 95.00000% : 9790.920us 00:10:06.794 98.00000% : 10896.347us 00:10:06.794 99.00000% : 12949.282us 00:10:06.794 99.50000% : 42111.486us 00:10:06.794 99.90000% : 49059.881us 00:10:06.794 99.99000% : 49480.996us 00:10:06.794 99.99900% : 49480.996us 00:10:06.794 99.99990% : 49480.996us 00:10:06.794 99.99999% : 49480.996us 00:10:06.794 00:10:06.794 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:06.794 ================================================================================= 00:10:06.794 1.00000% : 8527.576us 00:10:06.794 10.00000% : 8685.494us 00:10:06.794 25.00000% : 8896.051us 00:10:06.794 50.00000% : 9159.248us 00:10:06.794 75.00000% : 9369.806us 00:10:06.794 90.00000% : 9633.002us 00:10:06.794 95.00000% : 9790.920us 00:10:06.795 98.00000% : 11528.019us 00:10:06.795 99.00000% : 13317.757us 00:10:06.795 99.50000% : 32846.959us 00:10:06.795 99.90000% : 41058.699us 00:10:06.795 99.99000% : 41479.814us 00:10:06.795 99.99900% : 41479.814us 00:10:06.795 99.99990% : 41479.814us 00:10:06.795 99.99999% : 41479.814us 00:10:06.795 00:10:06.795 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:06.795 ============================================================================== 00:10:06.795 Range in us Cumulative IO count 00:10:06.795 8211.740 - 8264.379: 0.0149% ( 2) 00:10:06.795 8264.379 - 8317.018: 0.1339% ( 16) 00:10:06.795 8317.018 - 8369.658: 0.4390% ( 41) 00:10:06.795 8369.658 - 8422.297: 1.0640% ( 84) 00:10:06.795 8422.297 - 8474.937: 2.4479% ( 186) 00:10:06.795 8474.937 - 8527.576: 4.5387% ( 281) 00:10:06.795 8527.576 - 8580.215: 7.1801% ( 355) 00:10:06.795 8580.215 - 8632.855: 10.3497% ( 426) 00:10:06.795 8632.855 - 8685.494: 13.7872% ( 462) 00:10:06.795 8685.494 - 8738.133: 17.5521% ( 506) 00:10:06.795 8738.133 - 8790.773: 21.6295% ( 548) 00:10:06.795 8790.773 - 8843.412: 25.9003% ( 574) 00:10:06.795 8843.412 - 8896.051: 30.1935% ( 577) 00:10:06.795 8896.051 - 8948.691: 34.9033% ( 633) 00:10:06.795 8948.691 - 9001.330: 39.6801% ( 642) 00:10:06.795 9001.330 - 9053.969: 44.4345% ( 639) 00:10:06.795 9053.969 - 9106.609: 49.4420% ( 673) 00:10:06.795 9106.609 - 9159.248: 54.3006% ( 653) 00:10:06.795 9159.248 - 9211.888: 59.2039% ( 659) 00:10:06.795 9211.888 - 9264.527: 64.1443% ( 664) 00:10:06.795 9264.527 - 9317.166: 68.9583% ( 647) 00:10:06.795 9317.166 - 9369.806: 73.6830% ( 635) 00:10:06.795 9369.806 - 9422.445: 78.2664% ( 616) 00:10:06.795 9422.445 - 9475.084: 81.9494% ( 495) 00:10:06.795 9475.084 - 9527.724: 85.1190% ( 426) 00:10:06.795 9527.724 - 9580.363: 87.7753% ( 357) 00:10:06.795 9580.363 - 9633.002: 90.0000% ( 299) 00:10:06.795 9633.002 - 9685.642: 91.9196% ( 258) 00:10:06.795 9685.642 - 9738.281: 93.3482% ( 192) 00:10:06.795 9738.281 - 9790.920: 94.6131% ( 170) 00:10:06.795 9790.920 - 9843.560: 95.5208% ( 122) 00:10:06.795 9843.560 - 9896.199: 96.1533% ( 85) 00:10:06.795 9896.199 - 9948.839: 96.7560% 
( 81) 00:10:06.795 9948.839 - 10001.478: 97.0387% ( 38) 00:10:06.795 10001.478 - 10054.117: 97.2619% ( 30) 00:10:06.795 10054.117 - 10106.757: 97.4107% ( 20) 00:10:06.795 10106.757 - 10159.396: 97.4554% ( 6) 00:10:06.795 10159.396 - 10212.035: 97.5149% ( 8) 00:10:06.795 10212.035 - 10264.675: 97.5372% ( 3) 00:10:06.795 10264.675 - 10317.314: 97.5670% ( 4) 00:10:06.795 10317.314 - 10369.953: 97.6339% ( 9) 00:10:06.795 10369.953 - 10422.593: 97.6637% ( 4) 00:10:06.795 10422.593 - 10475.232: 97.7009% ( 5) 00:10:06.795 10475.232 - 10527.871: 97.7232% ( 3) 00:10:06.795 10527.871 - 10580.511: 97.7530% ( 4) 00:10:06.795 10580.511 - 10633.150: 97.7902% ( 5) 00:10:06.795 10633.150 - 10685.790: 97.8348% ( 6) 00:10:06.795 10685.790 - 10738.429: 97.8646% ( 4) 00:10:06.795 10738.429 - 10791.068: 97.9018% ( 5) 00:10:06.795 10791.068 - 10843.708: 97.9315% ( 4) 00:10:06.795 10843.708 - 10896.347: 97.9762% ( 6) 00:10:06.795 10896.347 - 10948.986: 98.0134% ( 5) 00:10:06.795 10948.986 - 11001.626: 98.0506% ( 5) 00:10:06.795 11001.626 - 11054.265: 98.0952% ( 6) 00:10:06.795 11054.265 - 11106.904: 98.1324% ( 5) 00:10:06.795 11106.904 - 11159.544: 98.1696% ( 5) 00:10:06.795 11159.544 - 11212.183: 98.1994% ( 4) 00:10:06.795 11212.183 - 11264.822: 98.2366% ( 5) 00:10:06.795 11264.822 - 11317.462: 98.2812% ( 6) 00:10:06.795 11317.462 - 11370.101: 98.3036% ( 3) 00:10:06.795 11370.101 - 11422.741: 98.3482% ( 6) 00:10:06.795 11422.741 - 11475.380: 98.3780% ( 4) 00:10:06.795 11475.380 - 11528.019: 98.3929% ( 2) 00:10:06.795 11528.019 - 11580.659: 98.4077% ( 2) 00:10:06.795 11580.659 - 11633.298: 98.4226% ( 2) 00:10:06.795 11633.298 - 11685.937: 98.4301% ( 1) 00:10:06.795 11685.937 - 11738.577: 98.4673% ( 5) 00:10:06.795 11738.577 - 11791.216: 98.4747% ( 1) 00:10:06.795 11791.216 - 11843.855: 98.4896% ( 2) 00:10:06.795 11843.855 - 11896.495: 98.5045% ( 2) 00:10:06.795 11896.495 - 11949.134: 98.5119% ( 1) 00:10:06.795 11949.134 - 12001.773: 98.5342% ( 3) 00:10:06.795 12001.773 - 12054.413: 98.5565% ( 3) 00:10:06.795 12054.413 - 12107.052: 98.5714% ( 2) 00:10:06.795 12475.528 - 12528.167: 98.5789% ( 1) 00:10:06.795 12528.167 - 12580.806: 98.6012% ( 3) 00:10:06.795 12580.806 - 12633.446: 98.6235% ( 3) 00:10:06.795 12633.446 - 12686.085: 98.6533% ( 4) 00:10:06.795 12686.085 - 12738.724: 98.6756% ( 3) 00:10:06.795 12738.724 - 12791.364: 98.6979% ( 3) 00:10:06.795 12791.364 - 12844.003: 98.7202% ( 3) 00:10:06.795 12844.003 - 12896.643: 98.7426% ( 3) 00:10:06.795 12896.643 - 12949.282: 98.7723% ( 4) 00:10:06.795 12949.282 - 13001.921: 98.7946% ( 3) 00:10:06.795 13001.921 - 13054.561: 98.8244% ( 4) 00:10:06.795 13054.561 - 13107.200: 98.8467% ( 3) 00:10:06.795 13107.200 - 13159.839: 98.8616% ( 2) 00:10:06.795 13159.839 - 13212.479: 98.8914% ( 4) 00:10:06.795 13212.479 - 13265.118: 98.9062% ( 2) 00:10:06.795 13265.118 - 13317.757: 98.9360% ( 4) 00:10:06.795 13317.757 - 13370.397: 98.9583% ( 3) 00:10:06.795 13370.397 - 13423.036: 98.9807% ( 3) 00:10:06.795 13423.036 - 13475.676: 99.0104% ( 4) 00:10:06.795 13475.676 - 13580.954: 99.0476% ( 5) 00:10:06.795 46112.077 - 46322.635: 99.0625% ( 2) 00:10:06.795 46322.635 - 46533.192: 99.1146% ( 7) 00:10:06.795 46533.192 - 46743.749: 99.1667% ( 7) 00:10:06.795 46743.749 - 46954.307: 99.2188% ( 7) 00:10:06.795 46954.307 - 47164.864: 99.2708% ( 7) 00:10:06.795 47164.864 - 47375.422: 99.3229% ( 7) 00:10:06.795 47375.422 - 47585.979: 99.3824% ( 8) 00:10:06.795 47585.979 - 47796.537: 99.4345% ( 7) 00:10:06.795 47796.537 - 48007.094: 99.4792% ( 6) 00:10:06.795 48007.094 - 48217.651: 99.5238% ( 
6) 00:10:06.795 54744.932 - 55166.047: 99.5685% ( 6) 00:10:06.795 55166.047 - 55587.161: 99.6726% ( 14) 00:10:06.795 55587.161 - 56008.276: 99.7768% ( 14) 00:10:06.795 56008.276 - 56429.391: 99.8810% ( 14) 00:10:06.795 56429.391 - 56850.506: 99.9777% ( 13) 00:10:06.795 56850.506 - 57271.621: 100.0000% ( 3) 00:10:06.795 00:10:06.795 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:06.795 ============================================================================== 00:10:06.795 Range in us Cumulative IO count 00:10:06.795 8317.018 - 8369.658: 0.0446% ( 6) 00:10:06.795 8369.658 - 8422.297: 0.3199% ( 37) 00:10:06.795 8422.297 - 8474.937: 0.8333% ( 69) 00:10:06.795 8474.937 - 8527.576: 1.7857% ( 128) 00:10:06.795 8527.576 - 8580.215: 3.5938% ( 243) 00:10:06.795 8580.215 - 8632.855: 6.5699% ( 400) 00:10:06.795 8632.855 - 8685.494: 9.7321% ( 425) 00:10:06.795 8685.494 - 8738.133: 13.9658% ( 569) 00:10:06.795 8738.133 - 8790.773: 18.1027% ( 556) 00:10:06.795 8790.773 - 8843.412: 22.7158% ( 620) 00:10:06.795 8843.412 - 8896.051: 27.7604% ( 678) 00:10:06.795 8896.051 - 8948.691: 32.7902% ( 676) 00:10:06.795 8948.691 - 9001.330: 38.0804% ( 711) 00:10:06.795 9001.330 - 9053.969: 43.6458% ( 748) 00:10:06.795 9053.969 - 9106.609: 49.4420% ( 779) 00:10:06.795 9106.609 - 9159.248: 55.3943% ( 800) 00:10:06.795 9159.248 - 9211.888: 61.2500% ( 787) 00:10:06.795 9211.888 - 9264.527: 66.9345% ( 764) 00:10:06.795 9264.527 - 9317.166: 72.3438% ( 727) 00:10:06.795 9317.166 - 9369.806: 77.2619% ( 661) 00:10:06.795 9369.806 - 9422.445: 81.2649% ( 538) 00:10:06.795 9422.445 - 9475.084: 84.6429% ( 454) 00:10:06.795 9475.084 - 9527.724: 87.5223% ( 387) 00:10:06.795 9527.724 - 9580.363: 89.8214% ( 309) 00:10:06.795 9580.363 - 9633.002: 91.7262% ( 256) 00:10:06.795 9633.002 - 9685.642: 93.1920% ( 197) 00:10:06.795 9685.642 - 9738.281: 94.4643% ( 171) 00:10:06.795 9738.281 - 9790.920: 95.5060% ( 140) 00:10:06.795 9790.920 - 9843.560: 96.3244% ( 110) 00:10:06.795 9843.560 - 9896.199: 96.8676% ( 73) 00:10:06.795 9896.199 - 9948.839: 97.1429% ( 37) 00:10:06.795 9948.839 - 10001.478: 97.2991% ( 21) 00:10:06.795 10001.478 - 10054.117: 97.3958% ( 13) 00:10:06.795 10054.117 - 10106.757: 97.4479% ( 7) 00:10:06.795 10106.757 - 10159.396: 97.4926% ( 6) 00:10:06.795 10159.396 - 10212.035: 97.5298% ( 5) 00:10:06.795 10212.035 - 10264.675: 97.5521% ( 3) 00:10:06.795 10264.675 - 10317.314: 97.5893% ( 5) 00:10:06.795 10317.314 - 10369.953: 97.6414% ( 7) 00:10:06.796 10369.953 - 10422.593: 97.6786% ( 5) 00:10:06.796 10422.593 - 10475.232: 97.7232% ( 6) 00:10:06.796 10475.232 - 10527.871: 97.7530% ( 4) 00:10:06.796 10527.871 - 10580.511: 97.7827% ( 4) 00:10:06.796 10580.511 - 10633.150: 97.8125% ( 4) 00:10:06.796 10633.150 - 10685.790: 97.8348% ( 3) 00:10:06.796 10685.790 - 10738.429: 97.8646% ( 4) 00:10:06.796 10738.429 - 10791.068: 97.8943% ( 4) 00:10:06.796 10791.068 - 10843.708: 97.9167% ( 3) 00:10:06.796 10843.708 - 10896.347: 97.9464% ( 4) 00:10:06.796 10896.347 - 10948.986: 97.9762% ( 4) 00:10:06.796 10948.986 - 11001.626: 98.0060% ( 4) 00:10:06.796 11001.626 - 11054.265: 98.0357% ( 4) 00:10:06.796 11054.265 - 11106.904: 98.0729% ( 5) 00:10:06.796 11106.904 - 11159.544: 98.1250% ( 7) 00:10:06.796 11159.544 - 11212.183: 98.1473% ( 3) 00:10:06.796 11212.183 - 11264.822: 98.1622% ( 2) 00:10:06.796 11264.822 - 11317.462: 98.1845% ( 3) 00:10:06.796 11317.462 - 11370.101: 98.1994% ( 2) 00:10:06.796 11370.101 - 11422.741: 98.2292% ( 4) 00:10:06.796 11422.741 - 11475.380: 98.2366% ( 1) 00:10:06.796 11475.380 - 
11528.019: 98.2589% ( 3) 00:10:06.796 11528.019 - 11580.659: 98.2738% ( 2) 00:10:06.796 11580.659 - 11633.298: 98.2887% ( 2) 00:10:06.796 11633.298 - 11685.937: 98.3036% ( 2) 00:10:06.796 11685.937 - 11738.577: 98.3259% ( 3) 00:10:06.796 11738.577 - 11791.216: 98.3408% ( 2) 00:10:06.796 11791.216 - 11843.855: 98.3557% ( 2) 00:10:06.796 11843.855 - 11896.495: 98.3780% ( 3) 00:10:06.796 11896.495 - 11949.134: 98.3929% ( 2) 00:10:06.796 11949.134 - 12001.773: 98.4077% ( 2) 00:10:06.796 12001.773 - 12054.413: 98.4301% ( 3) 00:10:06.796 12054.413 - 12107.052: 98.4524% ( 3) 00:10:06.796 12107.052 - 12159.692: 98.4673% ( 2) 00:10:06.796 12159.692 - 12212.331: 98.4896% ( 3) 00:10:06.796 12212.331 - 12264.970: 98.5342% ( 6) 00:10:06.796 12264.970 - 12317.610: 98.5789% ( 6) 00:10:06.796 12317.610 - 12370.249: 98.6235% ( 6) 00:10:06.796 12370.249 - 12422.888: 98.6682% ( 6) 00:10:06.796 12422.888 - 12475.528: 98.7054% ( 5) 00:10:06.796 12475.528 - 12528.167: 98.7351% ( 4) 00:10:06.796 12528.167 - 12580.806: 98.7649% ( 4) 00:10:06.796 12580.806 - 12633.446: 98.7946% ( 4) 00:10:06.796 12633.446 - 12686.085: 98.8244% ( 4) 00:10:06.796 12686.085 - 12738.724: 98.8542% ( 4) 00:10:06.796 12738.724 - 12791.364: 98.8839% ( 4) 00:10:06.796 12791.364 - 12844.003: 98.9137% ( 4) 00:10:06.796 12844.003 - 12896.643: 98.9435% ( 4) 00:10:06.796 12896.643 - 12949.282: 98.9732% ( 4) 00:10:06.796 12949.282 - 13001.921: 99.0030% ( 4) 00:10:06.796 13001.921 - 13054.561: 99.0402% ( 5) 00:10:06.796 13054.561 - 13107.200: 99.0476% ( 1) 00:10:06.796 44427.618 - 44638.175: 99.0625% ( 2) 00:10:06.796 44638.175 - 44848.733: 99.0997% ( 5) 00:10:06.796 44848.733 - 45059.290: 99.1518% ( 7) 00:10:06.796 45059.290 - 45269.847: 99.2113% ( 8) 00:10:06.796 45269.847 - 45480.405: 99.2634% ( 7) 00:10:06.796 45480.405 - 45690.962: 99.3229% ( 8) 00:10:06.796 45690.962 - 45901.520: 99.3750% ( 7) 00:10:06.796 45901.520 - 46112.077: 99.4271% ( 7) 00:10:06.796 46112.077 - 46322.635: 99.4792% ( 7) 00:10:06.796 46322.635 - 46533.192: 99.5238% ( 6) 00:10:06.796 52849.915 - 53060.472: 99.5461% ( 3) 00:10:06.796 53060.472 - 53271.030: 99.5982% ( 7) 00:10:06.796 53271.030 - 53481.587: 99.6503% ( 7) 00:10:06.796 53481.587 - 53692.145: 99.7098% ( 8) 00:10:06.796 53692.145 - 53902.702: 99.7545% ( 6) 00:10:06.796 53902.702 - 54323.817: 99.8661% ( 15) 00:10:06.796 54323.817 - 54744.932: 99.9777% ( 15) 00:10:06.796 54744.932 - 55166.047: 100.0000% ( 3) 00:10:06.796 00:10:06.796 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:06.796 ============================================================================== 00:10:06.796 Range in us Cumulative IO count 00:10:06.796 8264.379 - 8317.018: 0.0074% ( 1) 00:10:06.796 8317.018 - 8369.658: 0.0446% ( 5) 00:10:06.796 8369.658 - 8422.297: 0.1637% ( 16) 00:10:06.796 8422.297 - 8474.937: 0.8557% ( 93) 00:10:06.796 8474.937 - 8527.576: 2.0610% ( 162) 00:10:06.796 8527.576 - 8580.215: 3.8616% ( 242) 00:10:06.796 8580.215 - 8632.855: 6.4435% ( 347) 00:10:06.796 8632.855 - 8685.494: 9.8661% ( 460) 00:10:06.796 8685.494 - 8738.133: 13.8318% ( 533) 00:10:06.796 8738.133 - 8790.773: 18.0283% ( 564) 00:10:06.796 8790.773 - 8843.412: 22.6414% ( 620) 00:10:06.796 8843.412 - 8896.051: 27.5000% ( 653) 00:10:06.796 8896.051 - 8948.691: 32.5372% ( 677) 00:10:06.796 8948.691 - 9001.330: 38.0208% ( 737) 00:10:06.796 9001.330 - 9053.969: 43.7351% ( 768) 00:10:06.796 9053.969 - 9106.609: 49.6429% ( 794) 00:10:06.796 9106.609 - 9159.248: 55.4762% ( 784) 00:10:06.796 9159.248 - 9211.888: 61.3988% ( 796) 00:10:06.796 
9211.888 - 9264.527: 67.1503% ( 773) 00:10:06.796 9264.527 - 9317.166: 72.5818% ( 730) 00:10:06.796 9317.166 - 9369.806: 77.3735% ( 644) 00:10:06.796 9369.806 - 9422.445: 81.4732% ( 551) 00:10:06.796 9422.445 - 9475.084: 84.9107% ( 462) 00:10:06.796 9475.084 - 9527.724: 87.8125% ( 390) 00:10:06.796 9527.724 - 9580.363: 90.0446% ( 300) 00:10:06.796 9580.363 - 9633.002: 91.9048% ( 250) 00:10:06.796 9633.002 - 9685.642: 93.4673% ( 210) 00:10:06.796 9685.642 - 9738.281: 94.7917% ( 178) 00:10:06.796 9738.281 - 9790.920: 95.8036% ( 136) 00:10:06.796 9790.920 - 9843.560: 96.5551% ( 101) 00:10:06.796 9843.560 - 9896.199: 96.9717% ( 56) 00:10:06.796 9896.199 - 9948.839: 97.1726% ( 27) 00:10:06.796 9948.839 - 10001.478: 97.3289% ( 21) 00:10:06.796 10001.478 - 10054.117: 97.4554% ( 17) 00:10:06.796 10054.117 - 10106.757: 97.5446% ( 12) 00:10:06.796 10106.757 - 10159.396: 97.6042% ( 8) 00:10:06.796 10159.396 - 10212.035: 97.6637% ( 8) 00:10:06.796 10212.035 - 10264.675: 97.7083% ( 6) 00:10:06.796 10264.675 - 10317.314: 97.7604% ( 7) 00:10:06.796 10317.314 - 10369.953: 97.7976% ( 5) 00:10:06.796 10369.953 - 10422.593: 97.8423% ( 6) 00:10:06.796 10422.593 - 10475.232: 97.8646% ( 3) 00:10:06.796 10475.232 - 10527.871: 97.8943% ( 4) 00:10:06.796 10527.871 - 10580.511: 97.9241% ( 4) 00:10:06.796 10580.511 - 10633.150: 97.9464% ( 3) 00:10:06.796 10633.150 - 10685.790: 97.9762% ( 4) 00:10:06.796 10685.790 - 10738.429: 98.0060% ( 4) 00:10:06.796 10738.429 - 10791.068: 98.0357% ( 4) 00:10:06.796 10791.068 - 10843.708: 98.0655% ( 4) 00:10:06.796 10843.708 - 10896.347: 98.0878% ( 3) 00:10:06.796 10896.347 - 10948.986: 98.1027% ( 2) 00:10:06.796 10948.986 - 11001.626: 98.1176% ( 2) 00:10:06.796 11001.626 - 11054.265: 98.1399% ( 3) 00:10:06.796 11054.265 - 11106.904: 98.1622% ( 3) 00:10:06.796 11106.904 - 11159.544: 98.1771% ( 2) 00:10:06.796 11159.544 - 11212.183: 98.1920% ( 2) 00:10:06.796 11212.183 - 11264.822: 98.2068% ( 2) 00:10:06.796 11264.822 - 11317.462: 98.2217% ( 2) 00:10:06.796 11317.462 - 11370.101: 98.2366% ( 2) 00:10:06.796 11370.101 - 11422.741: 98.2589% ( 3) 00:10:06.796 11422.741 - 11475.380: 98.2738% ( 2) 00:10:06.796 11475.380 - 11528.019: 98.2887% ( 2) 00:10:06.796 11528.019 - 11580.659: 98.3036% ( 2) 00:10:06.796 11580.659 - 11633.298: 98.3185% ( 2) 00:10:06.796 11633.298 - 11685.937: 98.3333% ( 2) 00:10:06.796 11685.937 - 11738.577: 98.3557% ( 3) 00:10:06.796 11738.577 - 11791.216: 98.3780% ( 3) 00:10:06.796 11791.216 - 11843.855: 98.4152% ( 5) 00:10:06.796 11843.855 - 11896.495: 98.4524% ( 5) 00:10:06.796 11896.495 - 11949.134: 98.5119% ( 8) 00:10:06.796 11949.134 - 12001.773: 98.5565% ( 6) 00:10:06.796 12001.773 - 12054.413: 98.6012% ( 6) 00:10:06.796 12054.413 - 12107.052: 98.6458% ( 6) 00:10:06.796 12107.052 - 12159.692: 98.6979% ( 7) 00:10:06.796 12159.692 - 12212.331: 98.7351% ( 5) 00:10:06.796 12212.331 - 12264.970: 98.7872% ( 7) 00:10:06.796 12264.970 - 12317.610: 98.8318% ( 6) 00:10:06.796 12317.610 - 12370.249: 98.8616% ( 4) 00:10:06.796 12370.249 - 12422.888: 98.8839% ( 3) 00:10:06.796 12422.888 - 12475.528: 98.9137% ( 4) 00:10:06.796 12475.528 - 12528.167: 98.9360% ( 3) 00:10:06.796 12528.167 - 12580.806: 98.9658% ( 4) 00:10:06.796 12580.806 - 12633.446: 98.9955% ( 4) 00:10:06.796 12633.446 - 12686.085: 99.0179% ( 3) 00:10:06.796 12686.085 - 12738.724: 99.0476% ( 4) 00:10:06.796 43374.831 - 43585.388: 99.0774% ( 4) 00:10:06.796 43585.388 - 43795.945: 99.1369% ( 8) 00:10:06.796 43795.945 - 44006.503: 99.1964% ( 8) 00:10:06.796 44006.503 - 44217.060: 99.2485% ( 7) 00:10:06.796 
44217.060 - 44427.618: 99.3080% ( 8) 00:10:06.796 44427.618 - 44638.175: 99.3676% ( 8) 00:10:06.796 44638.175 - 44848.733: 99.4196% ( 7) 00:10:06.796 44848.733 - 45059.290: 99.4717% ( 7) 00:10:06.796 45059.290 - 45269.847: 99.5238% ( 7) 00:10:06.796 51376.013 - 51586.570: 99.5759% ( 7) 00:10:06.796 51586.570 - 51797.128: 99.6354% ( 8) 00:10:06.796 51797.128 - 52007.685: 99.6875% ( 7) 00:10:06.796 52007.685 - 52218.243: 99.7396% ( 7) 00:10:06.796 52218.243 - 52428.800: 99.7917% ( 7) 00:10:06.796 52428.800 - 52639.357: 99.8363% ( 6) 00:10:06.796 52639.357 - 52849.915: 99.8810% ( 6) 00:10:06.796 52849.915 - 53060.472: 99.9256% ( 6) 00:10:06.796 53060.472 - 53271.030: 99.9702% ( 6) 00:10:06.796 53271.030 - 53481.587: 100.0000% ( 4) 00:10:06.796 00:10:06.796 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:06.796 ============================================================================== 00:10:06.796 Range in us Cumulative IO count 00:10:06.796 8264.379 - 8317.018: 0.0298% ( 4) 00:10:06.796 8317.018 - 8369.658: 0.0670% ( 5) 00:10:06.796 8369.658 - 8422.297: 0.2381% ( 23) 00:10:06.796 8422.297 - 8474.937: 0.8482% ( 82) 00:10:06.796 8474.937 - 8527.576: 1.9568% ( 149) 00:10:06.796 8527.576 - 8580.215: 3.7946% ( 247) 00:10:06.796 8580.215 - 8632.855: 6.4807% ( 361) 00:10:06.797 8632.855 - 8685.494: 9.7768% ( 443) 00:10:06.797 8685.494 - 8738.133: 13.7574% ( 535) 00:10:06.797 8738.133 - 8790.773: 18.2366% ( 602) 00:10:06.797 8790.773 - 8843.412: 22.7604% ( 608) 00:10:06.797 8843.412 - 8896.051: 27.5818% ( 648) 00:10:06.797 8896.051 - 8948.691: 32.7455% ( 694) 00:10:06.797 8948.691 - 9001.330: 38.2440% ( 739) 00:10:06.797 9001.330 - 9053.969: 43.9658% ( 769) 00:10:06.797 9053.969 - 9106.609: 49.6057% ( 758) 00:10:06.797 9106.609 - 9159.248: 55.5357% ( 797) 00:10:06.797 9159.248 - 9211.888: 61.4062% ( 789) 00:10:06.797 9211.888 - 9264.527: 67.0536% ( 759) 00:10:06.797 9264.527 - 9317.166: 72.4182% ( 721) 00:10:06.797 9317.166 - 9369.806: 77.2024% ( 643) 00:10:06.797 9369.806 - 9422.445: 81.3095% ( 552) 00:10:06.797 9422.445 - 9475.084: 84.8289% ( 473) 00:10:06.797 9475.084 - 9527.724: 87.6339% ( 377) 00:10:06.797 9527.724 - 9580.363: 89.9851% ( 316) 00:10:06.797 9580.363 - 9633.002: 91.9122% ( 259) 00:10:06.797 9633.002 - 9685.642: 93.4524% ( 207) 00:10:06.797 9685.642 - 9738.281: 94.7693% ( 177) 00:10:06.797 9738.281 - 9790.920: 95.7292% ( 129) 00:10:06.797 9790.920 - 9843.560: 96.3616% ( 85) 00:10:06.797 9843.560 - 9896.199: 96.8155% ( 61) 00:10:06.797 9896.199 - 9948.839: 97.1354% ( 43) 00:10:06.797 9948.839 - 10001.478: 97.3140% ( 24) 00:10:06.797 10001.478 - 10054.117: 97.4256% ( 15) 00:10:06.797 10054.117 - 10106.757: 97.5000% ( 10) 00:10:06.797 10106.757 - 10159.396: 97.5446% ( 6) 00:10:06.797 10159.396 - 10212.035: 97.5818% ( 5) 00:10:06.797 10212.035 - 10264.675: 97.6339% ( 7) 00:10:06.797 10264.675 - 10317.314: 97.6786% ( 6) 00:10:06.797 10317.314 - 10369.953: 97.7158% ( 5) 00:10:06.797 10369.953 - 10422.593: 97.7604% ( 6) 00:10:06.797 10422.593 - 10475.232: 97.8051% ( 6) 00:10:06.797 10475.232 - 10527.871: 97.8423% ( 5) 00:10:06.797 10527.871 - 10580.511: 97.8795% ( 5) 00:10:06.797 10580.511 - 10633.150: 97.9241% ( 6) 00:10:06.797 10633.150 - 10685.790: 97.9688% ( 6) 00:10:06.797 10685.790 - 10738.429: 98.0060% ( 5) 00:10:06.797 10738.429 - 10791.068: 98.0580% ( 7) 00:10:06.797 10791.068 - 10843.708: 98.0804% ( 3) 00:10:06.797 10843.708 - 10896.347: 98.0952% ( 2) 00:10:06.797 11264.822 - 11317.462: 98.1027% ( 1) 00:10:06.797 11317.462 - 11370.101: 98.1399% ( 5) 
00:10:06.797 11370.101 - 11422.741: 98.1920% ( 7) 00:10:06.797 11422.741 - 11475.380: 98.2292% ( 5) 00:10:06.797 11475.380 - 11528.019: 98.2738% ( 6) 00:10:06.797 11528.019 - 11580.659: 98.3482% ( 10) 00:10:06.797 11580.659 - 11633.298: 98.4077% ( 8) 00:10:06.797 11633.298 - 11685.937: 98.4449% ( 5) 00:10:06.797 11685.937 - 11738.577: 98.4821% ( 5) 00:10:06.797 11738.577 - 11791.216: 98.5119% ( 4) 00:10:06.797 11791.216 - 11843.855: 98.5565% ( 6) 00:10:06.797 11843.855 - 11896.495: 98.5863% ( 4) 00:10:06.797 11896.495 - 11949.134: 98.6310% ( 6) 00:10:06.797 11949.134 - 12001.773: 98.6756% ( 6) 00:10:06.797 12001.773 - 12054.413: 98.7202% ( 6) 00:10:06.797 12054.413 - 12107.052: 98.7649% ( 6) 00:10:06.797 12107.052 - 12159.692: 98.8095% ( 6) 00:10:06.797 12159.692 - 12212.331: 98.8765% ( 9) 00:10:06.797 12212.331 - 12264.970: 98.9137% ( 5) 00:10:06.797 12264.970 - 12317.610: 98.9286% ( 2) 00:10:06.797 12317.610 - 12370.249: 98.9509% ( 3) 00:10:06.797 12370.249 - 12422.888: 98.9583% ( 1) 00:10:06.797 12422.888 - 12475.528: 98.9732% ( 2) 00:10:06.797 12475.528 - 12528.167: 98.9807% ( 1) 00:10:06.797 12528.167 - 12580.806: 98.9955% ( 2) 00:10:06.797 12580.806 - 12633.446: 99.0179% ( 3) 00:10:06.797 12633.446 - 12686.085: 99.0402% ( 3) 00:10:06.797 12686.085 - 12738.724: 99.0476% ( 1) 00:10:06.797 41690.371 - 41900.929: 99.0699% ( 3) 00:10:06.797 41900.929 - 42111.486: 99.1220% ( 7) 00:10:06.797 42111.486 - 42322.043: 99.1741% ( 7) 00:10:06.797 42322.043 - 42532.601: 99.2262% ( 7) 00:10:06.797 42532.601 - 42743.158: 99.2783% ( 7) 00:10:06.797 42743.158 - 42953.716: 99.3304% ( 7) 00:10:06.797 42953.716 - 43164.273: 99.3899% ( 8) 00:10:06.797 43164.273 - 43374.831: 99.4420% ( 7) 00:10:06.797 43374.831 - 43585.388: 99.4940% ( 7) 00:10:06.797 43585.388 - 43795.945: 99.5238% ( 4) 00:10:06.797 49270.439 - 49480.996: 99.5312% ( 1) 00:10:06.797 49480.996 - 49691.553: 99.5833% ( 7) 00:10:06.797 49691.553 - 49902.111: 99.6429% ( 8) 00:10:06.797 49902.111 - 50112.668: 99.6949% ( 7) 00:10:06.797 50112.668 - 50323.226: 99.7470% ( 7) 00:10:06.797 50323.226 - 50533.783: 99.7991% ( 7) 00:10:06.797 50533.783 - 50744.341: 99.8586% ( 8) 00:10:06.797 50744.341 - 50954.898: 99.9182% ( 8) 00:10:06.797 50954.898 - 51165.455: 99.9702% ( 7) 00:10:06.797 51165.455 - 51376.013: 100.0000% ( 4) 00:10:06.797 00:10:06.797 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:06.797 ============================================================================== 00:10:06.797 Range in us Cumulative IO count 00:10:06.797 8317.018 - 8369.658: 0.0298% ( 4) 00:10:06.797 8369.658 - 8422.297: 0.2753% ( 33) 00:10:06.797 8422.297 - 8474.937: 1.0565% ( 105) 00:10:06.797 8474.937 - 8527.576: 2.1652% ( 149) 00:10:06.797 8527.576 - 8580.215: 4.0030% ( 247) 00:10:06.797 8580.215 - 8632.855: 6.5551% ( 343) 00:10:06.797 8632.855 - 8685.494: 9.8512% ( 443) 00:10:06.797 8685.494 - 8738.133: 13.7649% ( 526) 00:10:06.797 8738.133 - 8790.773: 18.1176% ( 585) 00:10:06.797 8790.773 - 8843.412: 22.7158% ( 618) 00:10:06.797 8843.412 - 8896.051: 27.7455% ( 676) 00:10:06.797 8896.051 - 8948.691: 32.9688% ( 702) 00:10:06.797 8948.691 - 9001.330: 38.3110% ( 718) 00:10:06.797 9001.330 - 9053.969: 43.9807% ( 762) 00:10:06.797 9053.969 - 9106.609: 49.7842% ( 780) 00:10:06.797 9106.609 - 9159.248: 55.5729% ( 778) 00:10:06.797 9159.248 - 9211.888: 61.4509% ( 790) 00:10:06.797 9211.888 - 9264.527: 67.0536% ( 753) 00:10:06.797 9264.527 - 9317.166: 72.2098% ( 693) 00:10:06.797 9317.166 - 9369.806: 77.0833% ( 655) 00:10:06.797 9369.806 - 9422.445: 
81.2798% ( 564) 00:10:06.797 9422.445 - 9475.084: 84.8214% ( 476) 00:10:06.797 9475.084 - 9527.724: 87.7827% ( 398) 00:10:06.797 9527.724 - 9580.363: 90.0967% ( 311) 00:10:06.797 9580.363 - 9633.002: 91.8676% ( 238) 00:10:06.797 9633.002 - 9685.642: 93.4077% ( 207) 00:10:06.797 9685.642 - 9738.281: 94.6503% ( 167) 00:10:06.797 9738.281 - 9790.920: 95.5729% ( 124) 00:10:06.797 9790.920 - 9843.560: 96.2202% ( 87) 00:10:06.797 9843.560 - 9896.199: 96.6815% ( 62) 00:10:06.797 9896.199 - 9948.839: 96.9568% ( 37) 00:10:06.797 9948.839 - 10001.478: 97.1801% ( 30) 00:10:06.797 10001.478 - 10054.117: 97.3438% ( 22) 00:10:06.797 10054.117 - 10106.757: 97.4479% ( 14) 00:10:06.797 10106.757 - 10159.396: 97.4777% ( 4) 00:10:06.797 10159.396 - 10212.035: 97.5223% ( 6) 00:10:06.797 10212.035 - 10264.675: 97.5670% ( 6) 00:10:06.797 10264.675 - 10317.314: 97.6116% ( 6) 00:10:06.797 10317.314 - 10369.953: 97.6637% ( 7) 00:10:06.797 10369.953 - 10422.593: 97.7083% ( 6) 00:10:06.797 10422.593 - 10475.232: 97.7455% ( 5) 00:10:06.797 10475.232 - 10527.871: 97.7976% ( 7) 00:10:06.797 10527.871 - 10580.511: 97.8423% ( 6) 00:10:06.797 10580.511 - 10633.150: 97.8869% ( 6) 00:10:06.797 10633.150 - 10685.790: 97.9092% ( 3) 00:10:06.797 10685.790 - 10738.429: 97.9241% ( 2) 00:10:06.797 10738.429 - 10791.068: 97.9390% ( 2) 00:10:06.797 10791.068 - 10843.708: 97.9688% ( 4) 00:10:06.797 10843.708 - 10896.347: 98.0134% ( 6) 00:10:06.797 10896.347 - 10948.986: 98.0506% ( 5) 00:10:06.797 10948.986 - 11001.626: 98.0952% ( 6) 00:10:06.797 11001.626 - 11054.265: 98.1548% ( 8) 00:10:06.797 11054.265 - 11106.904: 98.1994% ( 6) 00:10:06.797 11106.904 - 11159.544: 98.2366% ( 5) 00:10:06.797 11159.544 - 11212.183: 98.2738% ( 5) 00:10:06.797 11212.183 - 11264.822: 98.3259% ( 7) 00:10:06.797 11264.822 - 11317.462: 98.3557% ( 4) 00:10:06.797 11317.462 - 11370.101: 98.3780% ( 3) 00:10:06.797 11370.101 - 11422.741: 98.4077% ( 4) 00:10:06.797 11422.741 - 11475.380: 98.4152% ( 1) 00:10:06.797 11475.380 - 11528.019: 98.4449% ( 4) 00:10:06.797 11528.019 - 11580.659: 98.4747% ( 4) 00:10:06.797 11580.659 - 11633.298: 98.4970% ( 3) 00:10:06.797 11633.298 - 11685.937: 98.5417% ( 6) 00:10:06.797 11685.937 - 11738.577: 98.5863% ( 6) 00:10:06.797 11738.577 - 11791.216: 98.6310% ( 6) 00:10:06.797 11791.216 - 11843.855: 98.6384% ( 1) 00:10:06.797 11843.855 - 11896.495: 98.6533% ( 2) 00:10:06.797 11896.495 - 11949.134: 98.6756% ( 3) 00:10:06.797 11949.134 - 12001.773: 98.6905% ( 2) 00:10:06.797 12001.773 - 12054.413: 98.7128% ( 3) 00:10:06.797 12054.413 - 12107.052: 98.7277% ( 2) 00:10:06.797 12107.052 - 12159.692: 98.7426% ( 2) 00:10:06.797 12159.692 - 12212.331: 98.7574% ( 2) 00:10:06.797 12212.331 - 12264.970: 98.7798% ( 3) 00:10:06.797 12264.970 - 12317.610: 98.7946% ( 2) 00:10:06.797 12317.610 - 12370.249: 98.8170% ( 3) 00:10:06.797 12370.249 - 12422.888: 98.8318% ( 2) 00:10:06.797 12422.888 - 12475.528: 98.8467% ( 2) 00:10:06.797 12475.528 - 12528.167: 98.8690% ( 3) 00:10:06.797 12528.167 - 12580.806: 98.8839% ( 2) 00:10:06.797 12580.806 - 12633.446: 98.9062% ( 3) 00:10:06.797 12633.446 - 12686.085: 98.9211% ( 2) 00:10:06.797 12686.085 - 12738.724: 98.9435% ( 3) 00:10:06.797 12738.724 - 12791.364: 98.9583% ( 2) 00:10:06.797 12791.364 - 12844.003: 98.9807% ( 3) 00:10:06.797 12844.003 - 12896.643: 98.9955% ( 2) 00:10:06.797 12896.643 - 12949.282: 99.0179% ( 3) 00:10:06.797 12949.282 - 13001.921: 99.0327% ( 2) 00:10:06.797 13001.921 - 13054.561: 99.0476% ( 2) 00:10:06.797 40005.912 - 40216.469: 99.0923% ( 6) 00:10:06.797 40216.469 - 40427.027: 
99.1369% ( 6) 00:10:06.797 40427.027 - 40637.584: 99.1890% ( 7) 00:10:06.797 40637.584 - 40848.141: 99.2336% ( 6) 00:10:06.797 40848.141 - 41058.699: 99.2857% ( 7) 00:10:06.798 41058.699 - 41269.256: 99.3378% ( 7) 00:10:06.798 41269.256 - 41479.814: 99.3899% ( 7) 00:10:06.798 41479.814 - 41690.371: 99.4420% ( 7) 00:10:06.798 41690.371 - 41900.929: 99.4940% ( 7) 00:10:06.798 41900.929 - 42111.486: 99.5238% ( 4) 00:10:06.798 47375.422 - 47585.979: 99.5536% ( 4) 00:10:06.798 47585.979 - 47796.537: 99.6057% ( 7) 00:10:06.798 47796.537 - 48007.094: 99.6503% ( 6) 00:10:06.798 48007.094 - 48217.651: 99.7098% ( 8) 00:10:06.798 48217.651 - 48428.209: 99.7545% ( 6) 00:10:06.798 48428.209 - 48638.766: 99.8065% ( 7) 00:10:06.798 48638.766 - 48849.324: 99.8661% ( 8) 00:10:06.798 48849.324 - 49059.881: 99.9182% ( 7) 00:10:06.798 49059.881 - 49270.439: 99.9702% ( 7) 00:10:06.798 49270.439 - 49480.996: 100.0000% ( 4) 00:10:06.798 00:10:06.798 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:06.798 ============================================================================== 00:10:06.798 Range in us Cumulative IO count 00:10:06.798 8317.018 - 8369.658: 0.0666% ( 9) 00:10:06.798 8369.658 - 8422.297: 0.2962% ( 31) 00:10:06.798 8422.297 - 8474.937: 0.8812% ( 79) 00:10:06.798 8474.937 - 8527.576: 2.0661% ( 160) 00:10:06.798 8527.576 - 8580.215: 4.0358% ( 266) 00:10:06.798 8580.215 - 8632.855: 6.6129% ( 348) 00:10:06.798 8632.855 - 8685.494: 10.0193% ( 460) 00:10:06.798 8685.494 - 8738.133: 13.8626% ( 519) 00:10:06.798 8738.133 - 8790.773: 18.1872% ( 584) 00:10:06.798 8790.773 - 8843.412: 22.8747% ( 633) 00:10:06.798 8843.412 - 8896.051: 27.8066% ( 666) 00:10:06.798 8896.051 - 8948.691: 32.9754% ( 698) 00:10:06.798 8948.691 - 9001.330: 38.2775% ( 716) 00:10:06.798 9001.330 - 9053.969: 43.8611% ( 754) 00:10:06.798 9053.969 - 9106.609: 49.5853% ( 773) 00:10:06.798 9106.609 - 9159.248: 55.3540% ( 779) 00:10:06.798 9159.248 - 9211.888: 61.0856% ( 774) 00:10:06.798 9211.888 - 9264.527: 66.6543% ( 752) 00:10:06.798 9264.527 - 9317.166: 71.9491% ( 715) 00:10:06.798 9317.166 - 9369.806: 76.7847% ( 653) 00:10:06.798 9369.806 - 9422.445: 80.7983% ( 542) 00:10:06.798 9422.445 - 9475.084: 84.3232% ( 476) 00:10:06.798 9475.084 - 9527.724: 87.2704% ( 398) 00:10:06.798 9527.724 - 9580.363: 89.6253% ( 318) 00:10:06.798 9580.363 - 9633.002: 91.4766% ( 250) 00:10:06.798 9633.002 - 9685.642: 93.0021% ( 206) 00:10:06.798 9685.642 - 9738.281: 94.1647% ( 157) 00:10:06.798 9738.281 - 9790.920: 95.0459% ( 119) 00:10:06.798 9790.920 - 9843.560: 95.7642% ( 97) 00:10:06.798 9843.560 - 9896.199: 96.2307% ( 63) 00:10:06.798 9896.199 - 9948.839: 96.4825% ( 34) 00:10:06.798 9948.839 - 10001.478: 96.6825% ( 27) 00:10:06.798 10001.478 - 10054.117: 96.8009% ( 16) 00:10:06.798 10054.117 - 10106.757: 96.9046% ( 14) 00:10:06.798 10106.757 - 10159.396: 96.9861% ( 11) 00:10:06.798 10159.396 - 10212.035: 97.0157% ( 4) 00:10:06.798 10212.035 - 10264.675: 97.0601% ( 6) 00:10:06.798 10264.675 - 10317.314: 97.1120% ( 7) 00:10:06.798 10317.314 - 10369.953: 97.1564% ( 6) 00:10:06.798 10369.953 - 10422.593: 97.1934% ( 5) 00:10:06.798 10422.593 - 10475.232: 97.2379% ( 6) 00:10:06.798 10475.232 - 10527.871: 97.2601% ( 3) 00:10:06.798 10527.871 - 10580.511: 97.2823% ( 3) 00:10:06.798 10580.511 - 10633.150: 97.3267% ( 6) 00:10:06.798 10633.150 - 10685.790: 97.3711% ( 6) 00:10:06.798 10685.790 - 10738.429: 97.4082% ( 5) 00:10:06.798 10738.429 - 10791.068: 97.4452% ( 5) 00:10:06.798 10791.068 - 10843.708: 97.5044% ( 8) 00:10:06.798 10843.708 - 
10896.347: 97.5489% ( 6) 00:10:06.798 10896.347 - 10948.986: 97.5859% ( 5) 00:10:06.798 10948.986 - 11001.626: 97.6155% ( 4) 00:10:06.798 11001.626 - 11054.265: 97.6600% ( 6) 00:10:06.798 11054.265 - 11106.904: 97.6896% ( 4) 00:10:06.798 11106.904 - 11159.544: 97.7340% ( 6) 00:10:06.798 11159.544 - 11212.183: 97.7710% ( 5) 00:10:06.798 11212.183 - 11264.822: 97.8155% ( 6) 00:10:06.798 11264.822 - 11317.462: 97.8673% ( 7) 00:10:06.798 11317.462 - 11370.101: 97.9117% ( 6) 00:10:06.798 11370.101 - 11422.741: 97.9636% ( 7) 00:10:06.798 11422.741 - 11475.380: 97.9932% ( 4) 00:10:06.798 11475.380 - 11528.019: 98.0376% ( 6) 00:10:06.798 11528.019 - 11580.659: 98.0598% ( 3) 00:10:06.798 11580.659 - 11633.298: 98.0746% ( 2) 00:10:06.798 11633.298 - 11685.937: 98.0969% ( 3) 00:10:06.798 11685.937 - 11738.577: 98.1043% ( 1) 00:10:06.798 12001.773 - 12054.413: 98.1339% ( 4) 00:10:06.798 12054.413 - 12107.052: 98.1487% ( 2) 00:10:06.798 12107.052 - 12159.692: 98.1635% ( 2) 00:10:06.798 12159.692 - 12212.331: 98.1783% ( 2) 00:10:06.798 12212.331 - 12264.970: 98.2005% ( 3) 00:10:06.798 12264.970 - 12317.610: 98.2153% ( 2) 00:10:06.798 12317.610 - 12370.249: 98.2302% ( 2) 00:10:06.798 12370.249 - 12422.888: 98.2450% ( 2) 00:10:06.798 12422.888 - 12475.528: 98.2672% ( 3) 00:10:06.798 12475.528 - 12528.167: 98.2968% ( 4) 00:10:06.798 12528.167 - 12580.806: 98.3338% ( 5) 00:10:06.798 12580.806 - 12633.446: 98.3783% ( 6) 00:10:06.798 12633.446 - 12686.085: 98.4375% ( 8) 00:10:06.798 12686.085 - 12738.724: 98.4745% ( 5) 00:10:06.798 12738.724 - 12791.364: 98.5190% ( 6) 00:10:06.798 12791.364 - 12844.003: 98.5634% ( 6) 00:10:06.798 12844.003 - 12896.643: 98.6152% ( 7) 00:10:06.798 12896.643 - 12949.282: 98.6597% ( 6) 00:10:06.798 12949.282 - 13001.921: 98.7115% ( 7) 00:10:06.798 13001.921 - 13054.561: 98.7559% ( 6) 00:10:06.798 13054.561 - 13107.200: 98.8004% ( 6) 00:10:06.798 13107.200 - 13159.839: 98.8596% ( 8) 00:10:06.798 13159.839 - 13212.479: 98.9040% ( 6) 00:10:06.798 13212.479 - 13265.118: 98.9485% ( 6) 00:10:06.798 13265.118 - 13317.757: 99.0003% ( 7) 00:10:06.798 13317.757 - 13370.397: 99.0447% ( 6) 00:10:06.798 13370.397 - 13423.036: 99.0521% ( 1) 00:10:06.798 30951.942 - 31162.500: 99.1040% ( 7) 00:10:06.798 31162.500 - 31373.057: 99.1558% ( 7) 00:10:06.798 31373.057 - 31583.614: 99.2076% ( 7) 00:10:06.798 31583.614 - 31794.172: 99.2595% ( 7) 00:10:06.798 31794.172 - 32004.729: 99.3113% ( 7) 00:10:06.798 32004.729 - 32215.287: 99.3706% ( 8) 00:10:06.798 32215.287 - 32425.844: 99.4224% ( 7) 00:10:06.798 32425.844 - 32636.402: 99.4816% ( 8) 00:10:06.798 32636.402 - 32846.959: 99.5261% ( 6) 00:10:06.798 39374.239 - 39584.797: 99.5409% ( 2) 00:10:06.798 39584.797 - 39795.354: 99.5927% ( 7) 00:10:06.798 39795.354 - 40005.912: 99.6445% ( 7) 00:10:06.798 40005.912 - 40216.469: 99.6964% ( 7) 00:10:06.798 40216.469 - 40427.027: 99.7482% ( 7) 00:10:06.798 40427.027 - 40637.584: 99.8001% ( 7) 00:10:06.798 40637.584 - 40848.141: 99.8519% ( 7) 00:10:06.798 40848.141 - 41058.699: 99.9037% ( 7) 00:10:06.798 41058.699 - 41269.256: 99.9630% ( 8) 00:10:06.798 41269.256 - 41479.814: 100.0000% ( 5) 00:10:06.798 00:10:06.798 11:13:44 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:08.177 Initializing NVMe Controllers 00:10:08.177 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:08.178 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:08.178 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 
00:10:08.178 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:08.178 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:08.178 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:08.178 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:08.178 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:08.178 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:08.178 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:08.178 Initialization complete. Launching workers. 00:10:08.178 ======================================================== 00:10:08.178 Latency(us) 00:10:08.178 Device Information : IOPS MiB/s Average min max 00:10:08.178 PCIE (0000:00:10.0) NSID 1 from core 0: 13246.07 155.23 9694.94 6991.64 45030.56 00:10:08.178 PCIE (0000:00:11.0) NSID 1 from core 0: 13246.07 155.23 9681.33 7055.80 41401.75 00:10:08.178 PCIE (0000:00:13.0) NSID 1 from core 0: 13246.07 155.23 9667.41 6882.39 41245.10 00:10:08.178 PCIE (0000:00:12.0) NSID 1 from core 0: 13246.07 155.23 9653.67 7065.92 39130.18 00:10:08.178 PCIE (0000:00:12.0) NSID 2 from core 0: 13246.07 155.23 9640.41 7064.85 37601.49 00:10:08.178 PCIE (0000:00:12.0) NSID 3 from core 0: 13309.76 155.97 9580.64 6956.52 29895.28 00:10:08.178 ======================================================== 00:10:08.178 Total : 79540.13 932.11 9653.01 6882.39 45030.56 00:10:08.178 00:10:08.178 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:08.178 ================================================================================= 00:10:08.178 1.00000% : 7422.149us 00:10:08.178 10.00000% : 7948.543us 00:10:08.178 25.00000% : 8580.215us 00:10:08.178 50.00000% : 9211.888us 00:10:08.178 75.00000% : 10001.478us 00:10:08.178 90.00000% : 10791.068us 00:10:08.178 95.00000% : 12212.331us 00:10:08.178 98.00000% : 15160.135us 00:10:08.178 99.00000% : 18739.611us 00:10:08.178 99.50000% : 35373.648us 00:10:08.178 99.90000% : 44427.618us 00:10:08.178 99.99000% : 45059.290us 00:10:08.178 99.99900% : 45059.290us 00:10:08.178 99.99990% : 45059.290us 00:10:08.178 99.99999% : 45059.290us 00:10:08.178 00:10:08.178 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:08.178 ================================================================================= 00:10:08.178 1.00000% : 7474.789us 00:10:08.178 10.00000% : 7948.543us 00:10:08.178 25.00000% : 8580.215us 00:10:08.178 50.00000% : 9211.888us 00:10:08.178 75.00000% : 10054.117us 00:10:08.178 90.00000% : 10791.068us 00:10:08.178 95.00000% : 11896.495us 00:10:08.178 98.00000% : 15581.250us 00:10:08.178 99.00000% : 18844.890us 00:10:08.178 99.50000% : 34110.304us 00:10:08.178 99.90000% : 40848.141us 00:10:08.178 99.99000% : 41479.814us 00:10:08.178 99.99900% : 41479.814us 00:10:08.178 99.99990% : 41479.814us 00:10:08.178 99.99999% : 41479.814us 00:10:08.178 00:10:08.178 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:08.178 ================================================================================= 00:10:08.178 1.00000% : 7474.789us 00:10:08.178 10.00000% : 8001.182us 00:10:08.178 25.00000% : 8580.215us 00:10:08.178 50.00000% : 9159.248us 00:10:08.178 75.00000% : 10106.757us 00:10:08.178 90.00000% : 10791.068us 00:10:08.178 95.00000% : 12159.692us 00:10:08.178 98.00000% : 15160.135us 00:10:08.178 99.00000% : 17897.382us 00:10:08.178 99.50000% : 33478.631us 00:10:08.178 99.90000% : 41058.699us 00:10:08.178 99.99000% : 41269.256us 00:10:08.178 99.99900% : 41269.256us 00:10:08.178 99.99990% : 41269.256us 00:10:08.178 
99.99999% : 41269.256us 00:10:08.178 00:10:08.178 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:08.178 ================================================================================= 00:10:08.178 1.00000% : 7474.789us 00:10:08.178 10.00000% : 7895.904us 00:10:08.178 25.00000% : 8580.215us 00:10:08.178 50.00000% : 9211.888us 00:10:08.178 75.00000% : 10159.396us 00:10:08.178 90.00000% : 10791.068us 00:10:08.178 95.00000% : 12370.249us 00:10:08.178 98.00000% : 16107.643us 00:10:08.178 99.00000% : 17476.267us 00:10:08.178 99.50000% : 31373.057us 00:10:08.178 99.90000% : 38953.124us 00:10:08.178 99.99000% : 39163.682us 00:10:08.178 99.99900% : 39163.682us 00:10:08.178 99.99990% : 39163.682us 00:10:08.178 99.99999% : 39163.682us 00:10:08.178 00:10:08.178 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:08.178 ================================================================================= 00:10:08.178 1.00000% : 7474.789us 00:10:08.178 10.00000% : 7948.543us 00:10:08.178 25.00000% : 8527.576us 00:10:08.178 50.00000% : 9211.888us 00:10:08.178 75.00000% : 10054.117us 00:10:08.178 90.00000% : 10738.429us 00:10:08.178 95.00000% : 12475.528us 00:10:08.178 98.00000% : 16212.922us 00:10:08.178 99.00000% : 17476.267us 00:10:08.178 99.50000% : 29688.598us 00:10:08.178 99.90000% : 37268.665us 00:10:08.178 99.99000% : 37689.780us 00:10:08.178 99.99900% : 37689.780us 00:10:08.178 99.99990% : 37689.780us 00:10:08.178 99.99999% : 37689.780us 00:10:08.178 00:10:08.178 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:08.178 ================================================================================= 00:10:08.178 1.00000% : 7474.789us 00:10:08.178 10.00000% : 7948.543us 00:10:08.178 25.00000% : 8580.215us 00:10:08.178 50.00000% : 9211.888us 00:10:08.178 75.00000% : 10106.757us 00:10:08.178 90.00000% : 10791.068us 00:10:08.178 95.00000% : 12896.643us 00:10:08.178 98.00000% : 16002.365us 00:10:08.178 99.00000% : 18107.939us 00:10:08.178 99.50000% : 20002.956us 00:10:08.178 99.90000% : 29688.598us 00:10:08.178 99.99000% : 29899.155us 00:10:08.178 99.99900% : 29899.155us 00:10:08.178 99.99990% : 29899.155us 00:10:08.178 99.99999% : 29899.155us 00:10:08.178 00:10:08.178 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:08.178 ============================================================================== 00:10:08.178 Range in us Cumulative IO count 00:10:08.178 6948.395 - 7001.035: 0.0225% ( 3) 00:10:08.178 7001.035 - 7053.674: 0.0376% ( 2) 00:10:08.178 7053.674 - 7106.313: 0.0601% ( 3) 00:10:08.178 7106.313 - 7158.953: 0.1127% ( 7) 00:10:08.178 7158.953 - 7211.592: 0.1578% ( 6) 00:10:08.178 7211.592 - 7264.231: 0.2704% ( 15) 00:10:08.178 7264.231 - 7316.871: 0.5183% ( 33) 00:10:08.178 7316.871 - 7369.510: 0.9465% ( 57) 00:10:08.178 7369.510 - 7422.149: 1.3972% ( 60) 00:10:08.178 7422.149 - 7474.789: 1.9156% ( 69) 00:10:08.178 7474.789 - 7527.428: 2.5691% ( 87) 00:10:08.178 7527.428 - 7580.067: 3.1926% ( 83) 00:10:08.178 7580.067 - 7632.707: 3.7335% ( 72) 00:10:08.178 7632.707 - 7685.346: 4.5373% ( 107) 00:10:08.178 7685.346 - 7737.986: 5.9420% ( 187) 00:10:08.178 7737.986 - 7790.625: 7.2491% ( 174) 00:10:08.178 7790.625 - 7843.264: 8.3684% ( 149) 00:10:08.178 7843.264 - 7895.904: 9.4501% ( 144) 00:10:08.178 7895.904 - 7948.543: 10.5394% ( 145) 00:10:08.178 7948.543 - 8001.182: 11.4183% ( 117) 00:10:08.178 8001.182 - 8053.822: 12.1845% ( 102) 00:10:08.178 8053.822 - 8106.461: 13.0033% ( 109) 00:10:08.178 8106.461 - 8159.100: 
14.1602% ( 154) 00:10:08.178 8159.100 - 8211.740: 15.4147% ( 167) 00:10:08.178 8211.740 - 8264.379: 17.0147% ( 213) 00:10:08.178 8264.379 - 8317.018: 18.1941% ( 157) 00:10:08.178 8317.018 - 8369.658: 19.4712% ( 170) 00:10:08.178 8369.658 - 8422.297: 20.9660% ( 199) 00:10:08.178 8422.297 - 8474.937: 22.8816% ( 255) 00:10:08.178 8474.937 - 8527.576: 24.6995% ( 242) 00:10:08.178 8527.576 - 8580.215: 26.4648% ( 235) 00:10:08.178 8580.215 - 8632.855: 28.1851% ( 229) 00:10:08.178 8632.855 - 8685.494: 29.9429% ( 234) 00:10:08.178 8685.494 - 8738.133: 31.8735% ( 257) 00:10:08.178 8738.133 - 8790.773: 34.1346% ( 301) 00:10:08.178 8790.773 - 8843.412: 36.0577% ( 256) 00:10:08.178 8843.412 - 8896.051: 37.7479% ( 225) 00:10:08.178 8896.051 - 8948.691: 39.5207% ( 236) 00:10:08.178 8948.691 - 9001.330: 42.2401% ( 362) 00:10:08.178 9001.330 - 9053.969: 44.6139% ( 316) 00:10:08.178 9053.969 - 9106.609: 46.9952% ( 317) 00:10:08.178 9106.609 - 9159.248: 48.8957% ( 253) 00:10:08.178 9159.248 - 9211.888: 50.3456% ( 193) 00:10:08.178 9211.888 - 9264.527: 52.1409% ( 239) 00:10:08.178 9264.527 - 9317.166: 53.8161% ( 223) 00:10:08.178 9317.166 - 9369.806: 55.4537% ( 218) 00:10:08.178 9369.806 - 9422.445: 57.4669% ( 268) 00:10:08.178 9422.445 - 9475.084: 59.3074% ( 245) 00:10:08.178 9475.084 - 9527.724: 61.2605% ( 260) 00:10:08.178 9527.724 - 9580.363: 62.9056% ( 219) 00:10:08.178 9580.363 - 9633.002: 64.3930% ( 198) 00:10:08.178 9633.002 - 9685.642: 65.5048% ( 148) 00:10:08.178 9685.642 - 9738.281: 66.8870% ( 184) 00:10:08.178 9738.281 - 9790.920: 68.5847% ( 226) 00:10:08.178 9790.920 - 9843.560: 70.1547% ( 209) 00:10:08.178 9843.560 - 9896.199: 71.5144% ( 181) 00:10:08.178 9896.199 - 9948.839: 73.5652% ( 273) 00:10:08.178 9948.839 - 10001.478: 75.3606% ( 239) 00:10:08.178 10001.478 - 10054.117: 76.4573% ( 146) 00:10:08.178 10054.117 - 10106.757: 77.5316% ( 143) 00:10:08.178 10106.757 - 10159.396: 78.5607% ( 137) 00:10:08.178 10159.396 - 10212.035: 79.6199% ( 141) 00:10:08.178 10212.035 - 10264.675: 80.6040% ( 131) 00:10:08.178 10264.675 - 10317.314: 81.8885% ( 171) 00:10:08.178 10317.314 - 10369.953: 83.2933% ( 187) 00:10:08.178 10369.953 - 10422.593: 84.4201% ( 150) 00:10:08.178 10422.593 - 10475.232: 85.4868% ( 142) 00:10:08.178 10475.232 - 10527.871: 86.4183% ( 124) 00:10:08.178 10527.871 - 10580.511: 87.1695% ( 100) 00:10:08.178 10580.511 - 10633.150: 87.7779% ( 81) 00:10:08.178 10633.150 - 10685.790: 88.7019% ( 123) 00:10:08.178 10685.790 - 10738.429: 89.4231% ( 96) 00:10:08.178 10738.429 - 10791.068: 90.1893% ( 102) 00:10:08.178 10791.068 - 10843.708: 90.7001% ( 68) 00:10:08.178 10843.708 - 10896.347: 91.1208% ( 56) 00:10:08.178 10896.347 - 10948.986: 91.5790% ( 61) 00:10:08.178 10948.986 - 11001.626: 91.9321% ( 47) 00:10:08.178 11001.626 - 11054.265: 92.2100% ( 37) 00:10:08.178 11054.265 - 11106.904: 92.5030% ( 39) 00:10:08.178 11106.904 - 11159.544: 92.6983% ( 26) 00:10:08.178 11159.544 - 11212.183: 92.7885% ( 12) 00:10:08.178 11212.183 - 11264.822: 92.9462% ( 21) 00:10:08.178 11264.822 - 11317.462: 93.0739% ( 17) 00:10:08.178 11317.462 - 11370.101: 93.2692% ( 26) 00:10:08.178 11370.101 - 11422.741: 93.3969% ( 17) 00:10:08.178 11422.741 - 11475.380: 93.6223% ( 30) 00:10:08.178 11475.380 - 11528.019: 93.8627% ( 32) 00:10:08.178 11528.019 - 11580.659: 93.9979% ( 18) 00:10:08.178 11580.659 - 11633.298: 94.0880% ( 12) 00:10:08.178 11633.298 - 11685.937: 94.2082% ( 16) 00:10:08.178 11685.937 - 11738.577: 94.3510% ( 19) 00:10:08.178 11738.577 - 11791.216: 94.5162% ( 22) 00:10:08.178 11791.216 - 11843.855: 
94.5688% ( 7) 00:10:08.178 11843.855 - 11896.495: 94.6965% ( 17) 00:10:08.178 11896.495 - 11949.134: 94.8017% ( 14) 00:10:08.178 11949.134 - 12001.773: 94.8993% ( 13) 00:10:08.178 12001.773 - 12054.413: 94.9219% ( 3) 00:10:08.178 12054.413 - 12107.052: 94.9444% ( 3) 00:10:08.178 12107.052 - 12159.692: 94.9745% ( 4) 00:10:08.178 12159.692 - 12212.331: 95.0120% ( 5) 00:10:08.178 12212.331 - 12264.970: 95.0270% ( 2) 00:10:08.178 12264.970 - 12317.610: 95.0346% ( 1) 00:10:08.178 12370.249 - 12422.888: 95.0646% ( 4) 00:10:08.178 12422.888 - 12475.528: 95.0871% ( 3) 00:10:08.178 12475.528 - 12528.167: 95.1022% ( 2) 00:10:08.178 12528.167 - 12580.806: 95.1322% ( 4) 00:10:08.178 12580.806 - 12633.446: 95.1698% ( 5) 00:10:08.178 12633.446 - 12686.085: 95.2374% ( 9) 00:10:08.178 12686.085 - 12738.724: 95.3275% ( 12) 00:10:08.178 12738.724 - 12791.364: 95.4778% ( 20) 00:10:08.179 12791.364 - 12844.003: 95.6130% ( 18) 00:10:08.179 12844.003 - 12896.643: 95.7257% ( 15) 00:10:08.179 12896.643 - 12949.282: 95.8083% ( 11) 00:10:08.179 12949.282 - 13001.921: 95.9059% ( 13) 00:10:08.179 13001.921 - 13054.561: 95.9585% ( 7) 00:10:08.179 13054.561 - 13107.200: 96.0261% ( 9) 00:10:08.179 13107.200 - 13159.839: 96.1088% ( 11) 00:10:08.179 13159.839 - 13212.479: 96.1388% ( 4) 00:10:08.179 13212.479 - 13265.118: 96.1689% ( 4) 00:10:08.179 13265.118 - 13317.757: 96.2139% ( 6) 00:10:08.179 13317.757 - 13370.397: 96.2365% ( 3) 00:10:08.179 13370.397 - 13423.036: 96.2966% ( 8) 00:10:08.179 13475.676 - 13580.954: 96.3792% ( 11) 00:10:08.179 13580.954 - 13686.233: 96.5294% ( 20) 00:10:08.179 13686.233 - 13791.512: 96.5670% ( 5) 00:10:08.179 13791.512 - 13896.790: 96.5895% ( 3) 00:10:08.179 13896.790 - 14002.069: 96.6346% ( 6) 00:10:08.179 14002.069 - 14107.348: 96.7849% ( 20) 00:10:08.179 14107.348 - 14212.627: 96.9426% ( 21) 00:10:08.179 14212.627 - 14317.905: 97.2206% ( 37) 00:10:08.179 14317.905 - 14423.184: 97.4084% ( 25) 00:10:08.179 14423.184 - 14528.463: 97.6187% ( 28) 00:10:08.179 14528.463 - 14633.741: 97.7088% ( 12) 00:10:08.179 14633.741 - 14739.020: 97.7614% ( 7) 00:10:08.179 14739.020 - 14844.299: 97.8365% ( 10) 00:10:08.179 14844.299 - 14949.578: 97.8966% ( 8) 00:10:08.179 14949.578 - 15054.856: 97.9492% ( 7) 00:10:08.179 15054.856 - 15160.135: 98.0544% ( 14) 00:10:08.179 15160.135 - 15265.414: 98.1220% ( 9) 00:10:08.179 15265.414 - 15370.692: 98.1821% ( 8) 00:10:08.179 15370.692 - 15475.971: 98.2197% ( 5) 00:10:08.179 15475.971 - 15581.250: 98.2572% ( 5) 00:10:08.179 15581.250 - 15686.529: 98.2873% ( 4) 00:10:08.179 15686.529 - 15791.807: 98.3323% ( 6) 00:10:08.179 15791.807 - 15897.086: 98.3624% ( 4) 00:10:08.179 15897.086 - 16002.365: 98.3924% ( 4) 00:10:08.179 16002.365 - 16107.643: 98.4300% ( 5) 00:10:08.179 16107.643 - 16212.922: 98.4600% ( 4) 00:10:08.179 16212.922 - 16318.201: 98.4901% ( 4) 00:10:08.179 16318.201 - 16423.480: 98.5276% ( 5) 00:10:08.179 16423.480 - 16528.758: 98.5577% ( 4) 00:10:08.179 17476.267 - 17581.545: 98.6178% ( 8) 00:10:08.179 17581.545 - 17686.824: 98.6553% ( 5) 00:10:08.179 17686.824 - 17792.103: 98.6929% ( 5) 00:10:08.179 17792.103 - 17897.382: 98.7305% ( 5) 00:10:08.179 17897.382 - 18002.660: 98.7680% ( 5) 00:10:08.179 18002.660 - 18107.939: 98.8056% ( 5) 00:10:08.179 18107.939 - 18213.218: 98.8431% ( 5) 00:10:08.179 18213.218 - 18318.496: 98.8882% ( 6) 00:10:08.179 18318.496 - 18423.775: 98.9258% ( 5) 00:10:08.179 18423.775 - 18529.054: 98.9633% ( 5) 00:10:08.179 18529.054 - 18634.333: 98.9934% ( 4) 00:10:08.179 18634.333 - 18739.611: 99.0385% ( 6) 00:10:08.179 
32846.959 - 33057.516: 99.0910% ( 7) 00:10:08.179 33057.516 - 33268.074: 99.1887% ( 13) 00:10:08.179 33268.074 - 33478.631: 99.2413% ( 7) 00:10:08.179 33478.631 - 33689.189: 99.2713% ( 4) 00:10:08.179 33689.189 - 33899.746: 99.3239% ( 7) 00:10:08.179 33899.746 - 34110.304: 99.3765% ( 7) 00:10:08.179 34110.304 - 34320.861: 99.4291% ( 7) 00:10:08.179 34320.861 - 34531.418: 99.4591% ( 4) 00:10:08.179 34741.976 - 34952.533: 99.4742% ( 2) 00:10:08.179 34952.533 - 35163.091: 99.4892% ( 2) 00:10:08.179 35163.091 - 35373.648: 99.5117% ( 3) 00:10:08.179 35373.648 - 35584.206: 99.5192% ( 1) 00:10:08.179 41479.814 - 41690.371: 99.5267% ( 1) 00:10:08.179 41900.929 - 42111.486: 99.5343% ( 1) 00:10:08.179 42743.158 - 42953.716: 99.5718% ( 5) 00:10:08.179 42953.716 - 43164.273: 99.6169% ( 6) 00:10:08.179 43164.273 - 43374.831: 99.6695% ( 7) 00:10:08.179 43374.831 - 43585.388: 99.7221% ( 7) 00:10:08.179 43585.388 - 43795.945: 99.7671% ( 6) 00:10:08.179 43795.945 - 44006.503: 99.8197% ( 7) 00:10:08.179 44006.503 - 44217.060: 99.8573% ( 5) 00:10:08.179 44217.060 - 44427.618: 99.9023% ( 6) 00:10:08.179 44427.618 - 44638.175: 99.9399% ( 5) 00:10:08.179 44638.175 - 44848.733: 99.9850% ( 6) 00:10:08.179 44848.733 - 45059.290: 100.0000% ( 2) 00:10:08.179 00:10:08.179 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:08.179 ============================================================================== 00:10:08.179 Range in us Cumulative IO count 00:10:08.179 7053.674 - 7106.313: 0.0150% ( 2) 00:10:08.179 7158.953 - 7211.592: 0.0376% ( 3) 00:10:08.179 7211.592 - 7264.231: 0.1728% ( 18) 00:10:08.179 7264.231 - 7316.871: 0.4432% ( 36) 00:10:08.179 7316.871 - 7369.510: 0.5559% ( 15) 00:10:08.179 7369.510 - 7422.149: 0.7812% ( 30) 00:10:08.179 7422.149 - 7474.789: 1.1794% ( 53) 00:10:08.179 7474.789 - 7527.428: 1.7127% ( 71) 00:10:08.179 7527.428 - 7580.067: 2.6067% ( 119) 00:10:08.179 7580.067 - 7632.707: 3.4330% ( 110) 00:10:08.179 7632.707 - 7685.346: 4.2969% ( 115) 00:10:08.179 7685.346 - 7737.986: 5.7467% ( 193) 00:10:08.179 7737.986 - 7790.625: 6.7533% ( 134) 00:10:08.179 7790.625 - 7843.264: 8.1806% ( 190) 00:10:08.179 7843.264 - 7895.904: 9.7130% ( 204) 00:10:08.179 7895.904 - 7948.543: 11.0126% ( 173) 00:10:08.179 7948.543 - 8001.182: 11.9516% ( 125) 00:10:08.179 8001.182 - 8053.822: 12.6502% ( 93) 00:10:08.179 8053.822 - 8106.461: 13.4615% ( 108) 00:10:08.179 8106.461 - 8159.100: 14.2203% ( 101) 00:10:08.179 8159.100 - 8211.740: 14.9038% ( 91) 00:10:08.179 8211.740 - 8264.379: 16.0682% ( 155) 00:10:08.179 8264.379 - 8317.018: 17.0297% ( 128) 00:10:08.179 8317.018 - 8369.658: 18.9904% ( 261) 00:10:08.179 8369.658 - 8422.297: 20.6430% ( 220) 00:10:08.179 8422.297 - 8474.937: 22.4234% ( 237) 00:10:08.179 8474.937 - 8527.576: 24.1962% ( 236) 00:10:08.179 8527.576 - 8580.215: 25.3981% ( 160) 00:10:08.179 8580.215 - 8632.855: 26.9832% ( 211) 00:10:08.179 8632.855 - 8685.494: 28.8386% ( 247) 00:10:08.179 8685.494 - 8738.133: 30.9495% ( 281) 00:10:08.179 8738.133 - 8790.773: 34.2022% ( 433) 00:10:08.179 8790.773 - 8843.412: 36.5009% ( 306) 00:10:08.179 8843.412 - 8896.051: 38.4465% ( 259) 00:10:08.179 8896.051 - 8948.691: 40.5950% ( 286) 00:10:08.179 8948.691 - 9001.330: 42.9537% ( 314) 00:10:08.179 9001.330 - 9053.969: 44.7566% ( 240) 00:10:08.179 9053.969 - 9106.609: 46.3942% ( 218) 00:10:08.179 9106.609 - 9159.248: 48.6854% ( 305) 00:10:08.179 9159.248 - 9211.888: 50.7061% ( 269) 00:10:08.179 9211.888 - 9264.527: 52.2085% ( 200) 00:10:08.179 9264.527 - 9317.166: 53.6508% ( 192) 00:10:08.179 
9317.166 - 9369.806: 55.3185% ( 222) 00:10:08.179 9369.806 - 9422.445: 57.1890% ( 249) 00:10:08.179 9422.445 - 9475.084: 59.3374% ( 286) 00:10:08.179 9475.084 - 9527.724: 60.8023% ( 195) 00:10:08.179 9527.724 - 9580.363: 62.6653% ( 248) 00:10:08.179 9580.363 - 9633.002: 64.1827% ( 202) 00:10:08.179 9633.002 - 9685.642: 65.4447% ( 168) 00:10:08.179 9685.642 - 9738.281: 66.9621% ( 202) 00:10:08.179 9738.281 - 9790.920: 68.2843% ( 176) 00:10:08.179 9790.920 - 9843.560: 69.9069% ( 216) 00:10:08.179 9843.560 - 9896.199: 71.5144% ( 214) 00:10:08.179 9896.199 - 9948.839: 73.2197% ( 227) 00:10:08.179 9948.839 - 10001.478: 74.9775% ( 234) 00:10:08.179 10001.478 - 10054.117: 76.9156% ( 258) 00:10:08.179 10054.117 - 10106.757: 78.2151% ( 173) 00:10:08.179 10106.757 - 10159.396: 79.3720% ( 154) 00:10:08.179 10159.396 - 10212.035: 80.3035% ( 124) 00:10:08.179 10212.035 - 10264.675: 81.2876% ( 131) 00:10:08.179 10264.675 - 10317.314: 82.4069% ( 149) 00:10:08.179 10317.314 - 10369.953: 83.3609% ( 127) 00:10:08.179 10369.953 - 10422.593: 84.2473% ( 118) 00:10:08.179 10422.593 - 10475.232: 85.4041% ( 154) 00:10:08.179 10475.232 - 10527.871: 86.4558% ( 140) 00:10:08.179 10527.871 - 10580.511: 87.4624% ( 134) 00:10:08.179 10580.511 - 10633.150: 88.3789% ( 122) 00:10:08.179 10633.150 - 10685.790: 89.1301% ( 100) 00:10:08.179 10685.790 - 10738.429: 89.6559% ( 70) 00:10:08.179 10738.429 - 10791.068: 90.1818% ( 70) 00:10:08.179 10791.068 - 10843.708: 90.6701% ( 65) 00:10:08.179 10843.708 - 10896.347: 91.4588% ( 105) 00:10:08.179 10896.347 - 10948.986: 91.9471% ( 65) 00:10:08.179 10948.986 - 11001.626: 92.6007% ( 87) 00:10:08.179 11001.626 - 11054.265: 92.8711% ( 36) 00:10:08.179 11054.265 - 11106.904: 93.0138% ( 19) 00:10:08.179 11106.904 - 11159.544: 93.1190% ( 14) 00:10:08.179 11159.544 - 11212.183: 93.2166% ( 13) 00:10:08.179 11212.183 - 11264.822: 93.2918% ( 10) 00:10:08.179 11264.822 - 11317.462: 93.3819% ( 12) 00:10:08.179 11317.462 - 11370.101: 93.4946% ( 15) 00:10:08.179 11370.101 - 11422.741: 93.5998% ( 14) 00:10:08.179 11422.741 - 11475.380: 93.7124% ( 15) 00:10:08.179 11475.380 - 11528.019: 93.9528% ( 32) 00:10:08.179 11528.019 - 11580.659: 94.0730% ( 16) 00:10:08.179 11580.659 - 11633.298: 94.1782% ( 14) 00:10:08.179 11633.298 - 11685.937: 94.2834% ( 14) 00:10:08.179 11685.937 - 11738.577: 94.4486% ( 22) 00:10:08.179 11738.577 - 11791.216: 94.6815% ( 31) 00:10:08.179 11791.216 - 11843.855: 94.9369% ( 34) 00:10:08.179 11843.855 - 11896.495: 95.0195% ( 11) 00:10:08.179 11896.495 - 11949.134: 95.1022% ( 11) 00:10:08.179 11949.134 - 12001.773: 95.1547% ( 7) 00:10:08.179 12001.773 - 12054.413: 95.1773% ( 3) 00:10:08.179 12054.413 - 12107.052: 95.2224% ( 6) 00:10:08.179 12107.052 - 12159.692: 95.2524% ( 4) 00:10:08.179 12159.692 - 12212.331: 95.2825% ( 4) 00:10:08.179 12212.331 - 12264.970: 95.2975% ( 2) 00:10:08.179 12264.970 - 12317.610: 95.3200% ( 3) 00:10:08.179 12317.610 - 12370.249: 95.3275% ( 1) 00:10:08.179 12370.249 - 12422.888: 95.3576% ( 4) 00:10:08.179 12422.888 - 12475.528: 95.3876% ( 4) 00:10:08.179 12475.528 - 12528.167: 95.5003% ( 15) 00:10:08.179 12528.167 - 12580.806: 95.5754% ( 10) 00:10:08.179 12580.806 - 12633.446: 95.6806% ( 14) 00:10:08.179 12633.446 - 12686.085: 95.7933% ( 15) 00:10:08.179 12686.085 - 12738.724: 95.9435% ( 20) 00:10:08.179 12738.724 - 12791.364: 96.0036% ( 8) 00:10:08.179 12791.364 - 12844.003: 96.0487% ( 6) 00:10:08.179 12844.003 - 12896.643: 96.0787% ( 4) 00:10:08.179 12896.643 - 12949.282: 96.1163% ( 5) 00:10:08.179 12949.282 - 13001.921: 96.1689% ( 7) 
00:10:08.179 13001.921 - 13054.561: 96.1989% ( 4) 00:10:08.179 13054.561 - 13107.200: 96.2440% ( 6) 00:10:08.179 13107.200 - 13159.839: 96.2740% ( 4) 00:10:08.179 13159.839 - 13212.479: 96.3266% ( 7) 00:10:08.179 13212.479 - 13265.118: 96.3717% ( 6) 00:10:08.179 13265.118 - 13317.757: 96.4694% ( 13) 00:10:08.179 13317.757 - 13370.397: 96.5069% ( 5) 00:10:08.179 13370.397 - 13423.036: 96.5370% ( 4) 00:10:08.179 13423.036 - 13475.676: 96.5595% ( 3) 00:10:08.179 13475.676 - 13580.954: 96.5895% ( 4) 00:10:08.179 13580.954 - 13686.233: 96.6196% ( 4) 00:10:08.179 13686.233 - 13791.512: 96.6346% ( 2) 00:10:08.179 14423.184 - 14528.463: 96.6421% ( 1) 00:10:08.179 14528.463 - 14633.741: 96.6722% ( 4) 00:10:08.179 14633.741 - 14739.020: 96.7473% ( 10) 00:10:08.179 14739.020 - 14844.299: 96.9050% ( 21) 00:10:08.179 14844.299 - 14949.578: 97.0778% ( 23) 00:10:08.179 14949.578 - 15054.856: 97.3107% ( 31) 00:10:08.179 15054.856 - 15160.135: 97.5661% ( 34) 00:10:08.179 15160.135 - 15265.414: 97.7314% ( 22) 00:10:08.179 15265.414 - 15370.692: 97.8365% ( 14) 00:10:08.179 15370.692 - 15475.971: 97.9342% ( 13) 00:10:08.179 15475.971 - 15581.250: 98.0995% ( 22) 00:10:08.179 15581.250 - 15686.529: 98.3549% ( 34) 00:10:08.179 15686.529 - 15791.807: 98.4826% ( 17) 00:10:08.179 15791.807 - 15897.086: 98.5427% ( 8) 00:10:08.179 15897.086 - 16002.365: 98.5577% ( 2) 00:10:08.179 17792.103 - 17897.382: 98.6028% ( 6) 00:10:08.179 17897.382 - 18002.660: 98.6854% ( 11) 00:10:08.179 18002.660 - 18107.939: 98.7305% ( 6) 00:10:08.179 18107.939 - 18213.218: 98.7605% ( 4) 00:10:08.179 18213.218 - 18318.496: 98.8056% ( 6) 00:10:08.179 18318.496 - 18423.775: 98.8507% ( 6) 00:10:08.179 18423.775 - 18529.054: 98.8957% ( 6) 00:10:08.179 18529.054 - 18634.333: 98.9408% ( 6) 00:10:08.179 18634.333 - 18739.611: 98.9784% ( 5) 00:10:08.179 18739.611 - 18844.890: 99.0234% ( 6) 00:10:08.179 18844.890 - 18950.169: 99.0385% ( 2) 00:10:08.179 32004.729 - 32215.287: 99.0460% ( 1) 00:10:08.179 32846.959 - 33057.516: 99.1587% ( 15) 00:10:08.179 33057.516 - 33268.074: 99.2413% ( 11) 00:10:08.180 33268.074 - 33478.631: 99.3690% ( 17) 00:10:08.180 33478.631 - 33689.189: 99.4591% ( 12) 00:10:08.180 33689.189 - 33899.746: 99.4967% ( 5) 00:10:08.180 33899.746 - 34110.304: 99.5192% ( 3) 00:10:08.180 39795.354 - 40005.912: 99.5267% ( 1) 00:10:08.180 40005.912 - 40216.469: 99.6244% ( 13) 00:10:08.180 40216.469 - 40427.027: 99.7446% ( 16) 00:10:08.180 40427.027 - 40637.584: 99.8197% ( 10) 00:10:08.180 40637.584 - 40848.141: 99.9174% ( 13) 00:10:08.180 40848.141 - 41058.699: 99.9700% ( 7) 00:10:08.180 41058.699 - 41269.256: 99.9850% ( 2) 00:10:08.180 41269.256 - 41479.814: 100.0000% ( 2) 00:10:08.180 00:10:08.180 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:08.180 ============================================================================== 00:10:08.180 Range in us Cumulative IO count 00:10:08.180 6843.116 - 6895.756: 0.0075% ( 1) 00:10:08.180 7001.035 - 7053.674: 0.1277% ( 16) 00:10:08.180 7053.674 - 7106.313: 0.4132% ( 38) 00:10:08.180 7106.313 - 7158.953: 0.4883% ( 10) 00:10:08.180 7158.953 - 7211.592: 0.4958% ( 1) 00:10:08.180 7211.592 - 7264.231: 0.5033% ( 1) 00:10:08.180 7264.231 - 7316.871: 0.5559% ( 7) 00:10:08.180 7316.871 - 7369.510: 0.6686% ( 15) 00:10:08.180 7369.510 - 7422.149: 0.9390% ( 36) 00:10:08.180 7422.149 - 7474.789: 1.3597% ( 56) 00:10:08.180 7474.789 - 7527.428: 2.0808% ( 96) 00:10:08.180 7527.428 - 7580.067: 2.9297% ( 113) 00:10:08.180 7580.067 - 7632.707: 4.0941% ( 155) 00:10:08.180 7632.707 - 7685.346: 
4.9654% ( 116) 00:10:08.180 7685.346 - 7737.986: 5.7317% ( 102) 00:10:08.180 7737.986 - 7790.625: 6.3326% ( 80) 00:10:08.180 7790.625 - 7843.264: 6.8810% ( 73) 00:10:08.180 7843.264 - 7895.904: 7.8275% ( 126) 00:10:08.180 7895.904 - 7948.543: 8.6614% ( 111) 00:10:08.180 7948.543 - 8001.182: 10.0210% ( 181) 00:10:08.180 8001.182 - 8053.822: 11.4258% ( 187) 00:10:08.180 8053.822 - 8106.461: 12.7254% ( 173) 00:10:08.180 8106.461 - 8159.100: 14.2728% ( 206) 00:10:08.180 8159.100 - 8211.740: 15.4372% ( 155) 00:10:08.180 8211.740 - 8264.379: 16.5415% ( 147) 00:10:08.180 8264.379 - 8317.018: 17.4579% ( 122) 00:10:08.180 8317.018 - 8369.658: 18.9078% ( 193) 00:10:08.180 8369.658 - 8422.297: 20.5228% ( 215) 00:10:08.180 8422.297 - 8474.937: 22.2806% ( 234) 00:10:08.180 8474.937 - 8527.576: 23.9784% ( 226) 00:10:08.180 8527.576 - 8580.215: 26.2921% ( 308) 00:10:08.180 8580.215 - 8632.855: 29.2368% ( 392) 00:10:08.180 8632.855 - 8685.494: 31.2951% ( 274) 00:10:08.180 8685.494 - 8738.133: 33.5487% ( 300) 00:10:08.180 8738.133 - 8790.773: 35.4192% ( 249) 00:10:08.180 8790.773 - 8843.412: 37.4850% ( 275) 00:10:08.180 8843.412 - 8896.051: 39.4982% ( 268) 00:10:08.180 8896.051 - 8948.691: 41.5941% ( 279) 00:10:08.180 8948.691 - 9001.330: 43.5772% ( 264) 00:10:08.180 9001.330 - 9053.969: 46.4168% ( 378) 00:10:08.180 9053.969 - 9106.609: 48.6178% ( 293) 00:10:08.180 9106.609 - 9159.248: 51.1043% ( 331) 00:10:08.180 9159.248 - 9211.888: 52.9597% ( 247) 00:10:08.180 9211.888 - 9264.527: 54.3870% ( 190) 00:10:08.180 9264.527 - 9317.166: 55.6115% ( 163) 00:10:08.180 9317.166 - 9369.806: 56.8209% ( 161) 00:10:08.180 9369.806 - 9422.445: 58.2707% ( 193) 00:10:08.180 9422.445 - 9475.084: 59.4952% ( 163) 00:10:08.180 9475.084 - 9527.724: 60.5769% ( 144) 00:10:08.180 9527.724 - 9580.363: 61.8540% ( 170) 00:10:08.180 9580.363 - 9633.002: 63.2061% ( 180) 00:10:08.180 9633.002 - 9685.642: 64.7386% ( 204) 00:10:08.180 9685.642 - 9738.281: 66.0907% ( 180) 00:10:08.180 9738.281 - 9790.920: 67.4504% ( 181) 00:10:08.180 9790.920 - 9843.560: 68.7049% ( 167) 00:10:08.180 9843.560 - 9896.199: 69.8543% ( 153) 00:10:08.180 9896.199 - 9948.839: 71.1163% ( 168) 00:10:08.180 9948.839 - 10001.478: 73.0769% ( 261) 00:10:08.180 10001.478 - 10054.117: 74.6019% ( 203) 00:10:08.180 10054.117 - 10106.757: 76.1118% ( 201) 00:10:08.180 10106.757 - 10159.396: 77.7118% ( 213) 00:10:08.180 10159.396 - 10212.035: 79.0114% ( 173) 00:10:08.180 10212.035 - 10264.675: 80.6040% ( 212) 00:10:08.180 10264.675 - 10317.314: 81.5655% ( 128) 00:10:08.180 10317.314 - 10369.953: 82.3918% ( 110) 00:10:08.180 10369.953 - 10422.593: 83.5111% ( 149) 00:10:08.180 10422.593 - 10475.232: 84.4201% ( 121) 00:10:08.180 10475.232 - 10527.871: 85.5769% ( 154) 00:10:08.180 10527.871 - 10580.511: 86.5835% ( 134) 00:10:08.180 10580.511 - 10633.150: 87.6352% ( 140) 00:10:08.180 10633.150 - 10685.790: 88.4766% ( 112) 00:10:08.180 10685.790 - 10738.429: 89.3404% ( 115) 00:10:08.180 10738.429 - 10791.068: 90.0240% ( 91) 00:10:08.180 10791.068 - 10843.708: 90.5198% ( 66) 00:10:08.180 10843.708 - 10896.347: 90.9931% ( 63) 00:10:08.180 10896.347 - 10948.986: 91.4363% ( 59) 00:10:08.180 10948.986 - 11001.626: 91.8419% ( 54) 00:10:08.180 11001.626 - 11054.265: 92.0823% ( 32) 00:10:08.180 11054.265 - 11106.904: 92.2326% ( 20) 00:10:08.180 11106.904 - 11159.544: 92.3603% ( 17) 00:10:08.180 11159.544 - 11212.183: 92.4730% ( 15) 00:10:08.180 11212.183 - 11264.822: 92.5331% ( 8) 00:10:08.180 11264.822 - 11317.462: 92.6007% ( 9) 00:10:08.180 11317.462 - 11370.101: 92.6908% ( 12) 
00:10:08.180 11370.101 - 11422.741: 92.7734% ( 11) 00:10:08.180 11422.741 - 11475.380: 92.8486% ( 10) 00:10:08.180 11475.380 - 11528.019: 92.9462% ( 13) 00:10:08.180 11528.019 - 11580.659: 93.2767% ( 44) 00:10:08.180 11580.659 - 11633.298: 93.4270% ( 20) 00:10:08.180 11633.298 - 11685.937: 93.5772% ( 20) 00:10:08.180 11685.937 - 11738.577: 93.6523% ( 10) 00:10:08.180 11738.577 - 11791.216: 93.7200% ( 9) 00:10:08.180 11791.216 - 11843.855: 93.8176% ( 13) 00:10:08.180 11843.855 - 11896.495: 93.9078% ( 12) 00:10:08.180 11896.495 - 11949.134: 94.0279% ( 16) 00:10:08.180 11949.134 - 12001.773: 94.2458% ( 29) 00:10:08.180 12001.773 - 12054.413: 94.4035% ( 21) 00:10:08.180 12054.413 - 12107.052: 94.6740% ( 36) 00:10:08.180 12107.052 - 12159.692: 95.0947% ( 56) 00:10:08.180 12159.692 - 12212.331: 95.3350% ( 32) 00:10:08.180 12212.331 - 12264.970: 95.5679% ( 31) 00:10:08.180 12264.970 - 12317.610: 95.6656% ( 13) 00:10:08.180 12317.610 - 12370.249: 95.7782% ( 15) 00:10:08.180 12370.249 - 12422.888: 95.8984% ( 16) 00:10:08.180 12422.888 - 12475.528: 96.0487% ( 20) 00:10:08.180 12475.528 - 12528.167: 96.1463% ( 13) 00:10:08.180 12528.167 - 12580.806: 96.1914% ( 6) 00:10:08.180 12580.806 - 12633.446: 96.2590% ( 9) 00:10:08.180 12633.446 - 12686.085: 96.2966% ( 5) 00:10:08.180 12686.085 - 12738.724: 96.3567% ( 8) 00:10:08.180 12738.724 - 12791.364: 96.4769% ( 16) 00:10:08.180 12791.364 - 12844.003: 96.5144% ( 5) 00:10:08.180 12844.003 - 12896.643: 96.5294% ( 2) 00:10:08.180 12896.643 - 12949.282: 96.6046% ( 10) 00:10:08.180 12949.282 - 13001.921: 96.6797% ( 10) 00:10:08.180 13001.921 - 13054.561: 96.7698% ( 12) 00:10:08.180 13054.561 - 13107.200: 96.9276% ( 21) 00:10:08.180 13107.200 - 13159.839: 96.9877% ( 8) 00:10:08.180 13159.839 - 13212.479: 97.0478% ( 8) 00:10:08.180 13212.479 - 13265.118: 97.1079% ( 8) 00:10:08.180 13265.118 - 13317.757: 97.1154% ( 1) 00:10:08.180 13791.512 - 13896.790: 97.1229% ( 1) 00:10:08.180 14002.069 - 14107.348: 97.1454% ( 3) 00:10:08.180 14107.348 - 14212.627: 97.2055% ( 8) 00:10:08.180 14212.627 - 14317.905: 97.2731% ( 9) 00:10:08.180 14317.905 - 14423.184: 97.3858% ( 15) 00:10:08.180 14423.184 - 14528.463: 97.5361% ( 20) 00:10:08.180 14528.463 - 14633.741: 97.6487% ( 15) 00:10:08.180 14633.741 - 14739.020: 97.6938% ( 6) 00:10:08.180 14739.020 - 14844.299: 97.7464% ( 7) 00:10:08.180 14844.299 - 14949.578: 97.8290% ( 11) 00:10:08.180 14949.578 - 15054.856: 97.9567% ( 17) 00:10:08.180 15054.856 - 15160.135: 98.0018% ( 6) 00:10:08.180 15160.135 - 15265.414: 98.0319% ( 4) 00:10:08.180 15265.414 - 15370.692: 98.0619% ( 4) 00:10:08.180 15370.692 - 15475.971: 98.0769% ( 2) 00:10:08.180 15791.807 - 15897.086: 98.0844% ( 1) 00:10:08.180 16002.365 - 16107.643: 98.1671% ( 11) 00:10:08.180 16107.643 - 16212.922: 98.3023% ( 18) 00:10:08.180 16212.922 - 16318.201: 98.4976% ( 26) 00:10:08.180 16318.201 - 16423.480: 98.5577% ( 8) 00:10:08.180 16844.594 - 16949.873: 98.5652% ( 1) 00:10:08.180 17055.152 - 17160.431: 98.6403% ( 10) 00:10:08.180 17160.431 - 17265.709: 98.7455% ( 14) 00:10:08.180 17265.709 - 17370.988: 98.8281% ( 11) 00:10:08.180 17370.988 - 17476.267: 98.8431% ( 2) 00:10:08.180 17476.267 - 17581.545: 98.8807% ( 5) 00:10:08.180 17581.545 - 17686.824: 98.9108% ( 4) 00:10:08.180 17686.824 - 17792.103: 98.9558% ( 6) 00:10:08.180 17792.103 - 17897.382: 99.0009% ( 6) 00:10:08.180 17897.382 - 18002.660: 99.0385% ( 5) 00:10:08.180 31373.057 - 31583.614: 99.0460% ( 1) 00:10:08.180 31583.614 - 31794.172: 99.0986% ( 7) 00:10:08.180 31794.172 - 32004.729: 99.1511% ( 7) 00:10:08.180 
32004.729 - 32215.287: 99.2037% ( 7) 00:10:08.180 32215.287 - 32425.844: 99.2638% ( 8) 00:10:08.180 32425.844 - 32636.402: 99.3164% ( 7) 00:10:08.180 32636.402 - 32846.959: 99.3765% ( 8) 00:10:08.180 32846.959 - 33057.516: 99.4366% ( 8) 00:10:08.180 33057.516 - 33268.074: 99.4892% ( 7) 00:10:08.180 33268.074 - 33478.631: 99.5192% ( 4) 00:10:08.180 39163.682 - 39374.239: 99.5343% ( 2) 00:10:08.180 39374.239 - 39584.797: 99.5793% ( 6) 00:10:08.180 39584.797 - 39795.354: 99.6319% ( 7) 00:10:08.180 39795.354 - 40005.912: 99.6770% ( 6) 00:10:08.180 40005.912 - 40216.469: 99.7296% ( 7) 00:10:08.180 40216.469 - 40427.027: 99.7822% ( 7) 00:10:08.180 40427.027 - 40637.584: 99.8422% ( 8) 00:10:08.180 40637.584 - 40848.141: 99.8948% ( 7) 00:10:08.180 40848.141 - 41058.699: 99.9399% ( 6) 00:10:08.180 41058.699 - 41269.256: 100.0000% ( 8) 00:10:08.180 00:10:08.180 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:08.180 ============================================================================== 00:10:08.180 Range in us Cumulative IO count 00:10:08.180 7053.674 - 7106.313: 0.0075% ( 1) 00:10:08.180 7106.313 - 7158.953: 0.0300% ( 3) 00:10:08.180 7158.953 - 7211.592: 0.0826% ( 7) 00:10:08.180 7211.592 - 7264.231: 0.1653% ( 11) 00:10:08.180 7264.231 - 7316.871: 0.2254% ( 8) 00:10:08.180 7316.871 - 7369.510: 0.5559% ( 44) 00:10:08.180 7369.510 - 7422.149: 0.7812% ( 30) 00:10:08.180 7422.149 - 7474.789: 1.1569% ( 50) 00:10:08.180 7474.789 - 7527.428: 1.7428% ( 78) 00:10:08.180 7527.428 - 7580.067: 2.5090% ( 102) 00:10:08.180 7580.067 - 7632.707: 3.5306% ( 136) 00:10:08.180 7632.707 - 7685.346: 4.8603% ( 177) 00:10:08.180 7685.346 - 7737.986: 6.1674% ( 174) 00:10:08.180 7737.986 - 7790.625: 7.6623% ( 199) 00:10:08.180 7790.625 - 7843.264: 8.9618% ( 173) 00:10:08.180 7843.264 - 7895.904: 10.0285% ( 142) 00:10:08.180 7895.904 - 7948.543: 11.1629% ( 151) 00:10:08.180 7948.543 - 8001.182: 11.9817% ( 109) 00:10:08.180 8001.182 - 8053.822: 12.9132% ( 124) 00:10:08.180 8053.822 - 8106.461: 13.8972% ( 131) 00:10:08.180 8106.461 - 8159.100: 14.7912% ( 119) 00:10:08.180 8159.100 - 8211.740: 15.9706% ( 157) 00:10:08.180 8211.740 - 8264.379: 16.9621% ( 132) 00:10:08.180 8264.379 - 8317.018: 17.9462% ( 131) 00:10:08.180 8317.018 - 8369.658: 19.2834% ( 178) 00:10:08.180 8369.658 - 8422.297: 20.7257% ( 192) 00:10:08.180 8422.297 - 8474.937: 22.1079% ( 184) 00:10:08.180 8474.937 - 8527.576: 24.0535% ( 259) 00:10:08.180 8527.576 - 8580.215: 26.0892% ( 271) 00:10:08.180 8580.215 - 8632.855: 28.2452% ( 287) 00:10:08.180 8632.855 - 8685.494: 30.5814% ( 311) 00:10:08.180 8685.494 - 8738.133: 33.0153% ( 324) 00:10:08.180 8738.133 - 8790.773: 35.3140% ( 306) 00:10:08.180 8790.773 - 8843.412: 37.4850% ( 289) 00:10:08.180 8843.412 - 8896.051: 39.6785% ( 292) 00:10:08.180 8896.051 - 8948.691: 41.8119% ( 284) 00:10:08.180 8948.691 - 9001.330: 43.2317% ( 189) 00:10:08.180 9001.330 - 9053.969: 44.8543% ( 216) 00:10:08.180 9053.969 - 9106.609: 46.4393% ( 211) 00:10:08.180 9106.609 - 9159.248: 48.1220% ( 224) 00:10:08.180 9159.248 - 9211.888: 50.1502% ( 270) 00:10:08.180 9211.888 - 9264.527: 52.1334% ( 264) 00:10:08.180 9264.527 - 9317.166: 54.1842% ( 273) 00:10:08.180 9317.166 - 9369.806: 56.2425% ( 274) 00:10:08.180 9369.806 - 9422.445: 57.9026% ( 221) 00:10:08.181 9422.445 - 9475.084: 59.4276% ( 203) 00:10:08.181 9475.084 - 9527.724: 60.9826% ( 207) 00:10:08.181 9527.724 - 9580.363: 62.2446% ( 168) 00:10:08.181 9580.363 - 9633.002: 63.3789% ( 151) 00:10:08.181 9633.002 - 9685.642: 64.6785% ( 173) 00:10:08.181 
9685.642 - 9738.281: 65.8128% ( 151) 00:10:08.181 9738.281 - 9790.920: 67.2251% ( 188) 00:10:08.181 9790.920 - 9843.560: 68.4495% ( 163) 00:10:08.181 9843.560 - 9896.199: 69.7491% ( 173) 00:10:08.181 9896.199 - 9948.839: 70.9135% ( 155) 00:10:08.181 9948.839 - 10001.478: 72.2055% ( 172) 00:10:08.181 10001.478 - 10054.117: 73.5953% ( 185) 00:10:08.181 10054.117 - 10106.757: 74.8197% ( 163) 00:10:08.181 10106.757 - 10159.396: 75.9916% ( 156) 00:10:08.181 10159.396 - 10212.035: 77.5616% ( 209) 00:10:08.181 10212.035 - 10264.675: 79.0865% ( 203) 00:10:08.181 10264.675 - 10317.314: 80.4011% ( 175) 00:10:08.181 10317.314 - 10369.953: 81.7984% ( 186) 00:10:08.181 10369.953 - 10422.593: 83.3834% ( 211) 00:10:08.181 10422.593 - 10475.232: 84.7581% ( 183) 00:10:08.181 10475.232 - 10527.871: 85.8774% ( 149) 00:10:08.181 10527.871 - 10580.511: 87.2070% ( 177) 00:10:08.181 10580.511 - 10633.150: 88.1535% ( 126) 00:10:08.181 10633.150 - 10685.790: 88.9498% ( 106) 00:10:08.181 10685.790 - 10738.429: 89.7010% ( 100) 00:10:08.181 10738.429 - 10791.068: 90.7527% ( 140) 00:10:08.181 10791.068 - 10843.708: 91.3161% ( 75) 00:10:08.181 10843.708 - 10896.347: 91.7067% ( 52) 00:10:08.181 10896.347 - 10948.986: 92.1650% ( 61) 00:10:08.181 10948.986 - 11001.626: 92.3978% ( 31) 00:10:08.181 11001.626 - 11054.265: 92.5481% ( 20) 00:10:08.181 11054.265 - 11106.904: 92.6382% ( 12) 00:10:08.181 11106.904 - 11159.544: 92.7359% ( 13) 00:10:08.181 11159.544 - 11212.183: 92.8035% ( 9) 00:10:08.181 11212.183 - 11264.822: 92.8786% ( 10) 00:10:08.181 11264.822 - 11317.462: 92.9688% ( 12) 00:10:08.181 11317.462 - 11370.101: 93.0589% ( 12) 00:10:08.181 11370.101 - 11422.741: 93.2091% ( 20) 00:10:08.181 11422.741 - 11475.380: 93.3218% ( 15) 00:10:08.181 11475.380 - 11528.019: 93.4645% ( 19) 00:10:08.181 11528.019 - 11580.659: 93.6448% ( 24) 00:10:08.181 11580.659 - 11633.298: 93.8026% ( 21) 00:10:08.181 11633.298 - 11685.937: 93.9829% ( 24) 00:10:08.181 11685.937 - 11738.577: 94.1556% ( 23) 00:10:08.181 11738.577 - 11791.216: 94.3059% ( 20) 00:10:08.181 11791.216 - 11843.855: 94.5162% ( 28) 00:10:08.181 11843.855 - 11896.495: 94.6214% ( 14) 00:10:08.181 11896.495 - 11949.134: 94.6665% ( 6) 00:10:08.181 11949.134 - 12001.773: 94.7115% ( 6) 00:10:08.181 12054.413 - 12107.052: 94.7491% ( 5) 00:10:08.181 12107.052 - 12159.692: 94.7716% ( 3) 00:10:08.181 12159.692 - 12212.331: 94.8392% ( 9) 00:10:08.181 12212.331 - 12264.970: 94.8918% ( 7) 00:10:08.181 12264.970 - 12317.610: 94.9519% ( 8) 00:10:08.181 12317.610 - 12370.249: 95.0421% ( 12) 00:10:08.181 12370.249 - 12422.888: 95.1022% ( 8) 00:10:08.181 12422.888 - 12475.528: 95.1773% ( 10) 00:10:08.181 12475.528 - 12528.167: 95.2825% ( 14) 00:10:08.181 12528.167 - 12580.806: 95.4327% ( 20) 00:10:08.181 12580.806 - 12633.446: 95.5679% ( 18) 00:10:08.181 12633.446 - 12686.085: 95.7257% ( 21) 00:10:08.181 12686.085 - 12738.724: 95.8834% ( 21) 00:10:08.181 12738.724 - 12791.364: 96.0712% ( 25) 00:10:08.181 12791.364 - 12844.003: 96.2740% ( 27) 00:10:08.181 12844.003 - 12896.643: 96.5144% ( 32) 00:10:08.181 12896.643 - 12949.282: 96.5971% ( 11) 00:10:08.181 12949.282 - 13001.921: 96.6797% ( 11) 00:10:08.181 13001.921 - 13054.561: 96.7623% ( 11) 00:10:08.181 13054.561 - 13107.200: 96.8450% ( 11) 00:10:08.181 13107.200 - 13159.839: 96.9126% ( 9) 00:10:08.181 13159.839 - 13212.479: 96.9727% ( 8) 00:10:08.181 13212.479 - 13265.118: 97.0478% ( 10) 00:10:08.181 13265.118 - 13317.757: 97.0928% ( 6) 00:10:08.181 13317.757 - 13370.397: 97.1004% ( 1) 00:10:08.181 13370.397 - 13423.036: 97.1154% ( 2) 
00:10:08.181 13475.676 - 13580.954: 97.1529% ( 5) 00:10:08.181 13580.954 - 13686.233: 97.2206% ( 9) 00:10:08.181 13686.233 - 13791.512: 97.2806% ( 8) 00:10:08.181 13791.512 - 13896.790: 97.4609% ( 24) 00:10:08.181 13896.790 - 14002.069: 97.5060% ( 6) 00:10:08.181 14002.069 - 14107.348: 97.5361% ( 4) 00:10:08.181 14107.348 - 14212.627: 97.5586% ( 3) 00:10:08.181 14212.627 - 14317.905: 97.5811% ( 3) 00:10:08.181 14317.905 - 14423.184: 97.5962% ( 2) 00:10:08.181 14633.741 - 14739.020: 97.6187% ( 3) 00:10:08.181 14739.020 - 14844.299: 97.6487% ( 4) 00:10:08.181 14844.299 - 14949.578: 97.6788% ( 4) 00:10:08.181 14949.578 - 15054.856: 97.7013% ( 3) 00:10:08.181 15054.856 - 15160.135: 97.7314% ( 4) 00:10:08.181 15160.135 - 15265.414: 97.7539% ( 3) 00:10:08.181 15265.414 - 15370.692: 97.7840% ( 4) 00:10:08.181 15370.692 - 15475.971: 97.8215% ( 5) 00:10:08.181 15475.971 - 15581.250: 97.8591% ( 5) 00:10:08.181 15581.250 - 15686.529: 97.8741% ( 2) 00:10:08.181 15686.529 - 15791.807: 97.9117% ( 5) 00:10:08.181 15791.807 - 15897.086: 97.9567% ( 6) 00:10:08.181 15897.086 - 16002.365: 97.9868% ( 4) 00:10:08.181 16002.365 - 16107.643: 98.0018% ( 2) 00:10:08.181 16107.643 - 16212.922: 98.0319% ( 4) 00:10:08.181 16212.922 - 16318.201: 98.0544% ( 3) 00:10:08.181 16318.201 - 16423.480: 98.0769% ( 3) 00:10:08.181 16423.480 - 16528.758: 98.1070% ( 4) 00:10:08.181 16528.758 - 16634.037: 98.2046% ( 13) 00:10:08.181 16634.037 - 16739.316: 98.2948% ( 12) 00:10:08.181 16739.316 - 16844.594: 98.3999% ( 14) 00:10:08.181 16844.594 - 16949.873: 98.5427% ( 19) 00:10:08.181 16949.873 - 17055.152: 98.6929% ( 20) 00:10:08.181 17055.152 - 17160.431: 98.8131% ( 16) 00:10:08.181 17160.431 - 17265.709: 98.8957% ( 11) 00:10:08.181 17265.709 - 17370.988: 98.9859% ( 12) 00:10:08.181 17370.988 - 17476.267: 99.0309% ( 6) 00:10:08.181 17476.267 - 17581.545: 99.0385% ( 1) 00:10:08.181 29478.040 - 29688.598: 99.0535% ( 2) 00:10:08.181 29688.598 - 29899.155: 99.1136% ( 8) 00:10:08.181 29899.155 - 30109.712: 99.1737% ( 8) 00:10:08.181 30109.712 - 30320.270: 99.2188% ( 6) 00:10:08.181 30320.270 - 30530.827: 99.2788% ( 8) 00:10:08.181 30530.827 - 30741.385: 99.3389% ( 8) 00:10:08.181 30741.385 - 30951.942: 99.3990% ( 8) 00:10:08.181 30951.942 - 31162.500: 99.4591% ( 8) 00:10:08.181 31162.500 - 31373.057: 99.5042% ( 6) 00:10:08.181 31373.057 - 31583.614: 99.5192% ( 2) 00:10:08.181 37268.665 - 37479.222: 99.5718% ( 7) 00:10:08.181 37479.222 - 37689.780: 99.6319% ( 8) 00:10:08.181 37689.780 - 37900.337: 99.6845% ( 7) 00:10:08.181 37900.337 - 38110.895: 99.7371% ( 7) 00:10:08.181 38110.895 - 38321.452: 99.7972% ( 8) 00:10:08.181 38321.452 - 38532.010: 99.8422% ( 6) 00:10:08.181 38532.010 - 38742.567: 99.8948% ( 7) 00:10:08.181 38742.567 - 38953.124: 99.9549% ( 8) 00:10:08.181 38953.124 - 39163.682: 100.0000% ( 6) 00:10:08.181 00:10:08.181 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:08.181 ============================================================================== 00:10:08.181 Range in us Cumulative IO count 00:10:08.181 7053.674 - 7106.313: 0.0075% ( 1) 00:10:08.181 7106.313 - 7158.953: 0.0300% ( 3) 00:10:08.181 7158.953 - 7211.592: 0.0977% ( 9) 00:10:08.181 7211.592 - 7264.231: 0.1728% ( 10) 00:10:08.181 7264.231 - 7316.871: 0.3906% ( 29) 00:10:08.181 7316.871 - 7369.510: 0.5258% ( 18) 00:10:08.181 7369.510 - 7422.149: 0.7963% ( 36) 00:10:08.181 7422.149 - 7474.789: 1.2545% ( 61) 00:10:08.181 7474.789 - 7527.428: 2.0358% ( 104) 00:10:08.181 7527.428 - 7580.067: 3.0724% ( 138) 00:10:08.181 7580.067 - 7632.707: 
4.1316% ( 141) 00:10:08.181 7632.707 - 7685.346: 5.0556% ( 123) 00:10:08.181 7685.346 - 7737.986: 6.0322% ( 130) 00:10:08.181 7737.986 - 7790.625: 7.0012% ( 129) 00:10:08.181 7790.625 - 7843.264: 8.2782% ( 170) 00:10:08.181 7843.264 - 7895.904: 9.2473% ( 129) 00:10:08.181 7895.904 - 7948.543: 10.3891% ( 152) 00:10:08.181 7948.543 - 8001.182: 11.1328% ( 99) 00:10:08.181 8001.182 - 8053.822: 11.6136% ( 64) 00:10:08.181 8053.822 - 8106.461: 12.0944% ( 64) 00:10:08.181 8106.461 - 8159.100: 12.7178% ( 83) 00:10:08.181 8159.100 - 8211.740: 13.5968% ( 117) 00:10:08.181 8211.740 - 8264.379: 14.8438% ( 166) 00:10:08.181 8264.379 - 8317.018: 16.6767% ( 244) 00:10:08.181 8317.018 - 8369.658: 18.6599% ( 264) 00:10:08.181 8369.658 - 8422.297: 20.7858% ( 283) 00:10:08.181 8422.297 - 8474.937: 23.5276% ( 365) 00:10:08.181 8474.937 - 8527.576: 25.7212% ( 292) 00:10:08.181 8527.576 - 8580.215: 27.2160% ( 199) 00:10:08.181 8580.215 - 8632.855: 28.9964% ( 237) 00:10:08.181 8632.855 - 8685.494: 30.4387% ( 192) 00:10:08.181 8685.494 - 8738.133: 32.6397% ( 293) 00:10:08.181 8738.133 - 8790.773: 34.2698% ( 217) 00:10:08.181 8790.773 - 8843.412: 36.3732% ( 280) 00:10:08.181 8843.412 - 8896.051: 38.4315% ( 274) 00:10:08.181 8896.051 - 8948.691: 40.7377% ( 307) 00:10:08.181 8948.691 - 9001.330: 42.9237% ( 291) 00:10:08.181 9001.330 - 9053.969: 44.9294% ( 267) 00:10:08.181 9053.969 - 9106.609: 46.9877% ( 274) 00:10:08.181 9106.609 - 9159.248: 49.0385% ( 273) 00:10:08.181 9159.248 - 9211.888: 51.1043% ( 275) 00:10:08.181 9211.888 - 9264.527: 52.7419% ( 218) 00:10:08.181 9264.527 - 9317.166: 54.6800% ( 258) 00:10:08.181 9317.166 - 9369.806: 56.6481% ( 262) 00:10:08.181 9369.806 - 9422.445: 58.3609% ( 228) 00:10:08.181 9422.445 - 9475.084: 59.8407% ( 197) 00:10:08.181 9475.084 - 9527.724: 61.1328% ( 172) 00:10:08.181 9527.724 - 9580.363: 62.3347% ( 160) 00:10:08.181 9580.363 - 9633.002: 63.8071% ( 196) 00:10:08.181 9633.002 - 9685.642: 65.3245% ( 202) 00:10:08.181 9685.642 - 9738.281: 66.6541% ( 177) 00:10:08.181 9738.281 - 9790.920: 68.2843% ( 217) 00:10:08.181 9790.920 - 9843.560: 69.7716% ( 198) 00:10:08.181 9843.560 - 9896.199: 71.1989% ( 190) 00:10:08.181 9896.199 - 9948.839: 72.3257% ( 150) 00:10:08.181 9948.839 - 10001.478: 73.4675% ( 152) 00:10:08.181 10001.478 - 10054.117: 75.1277% ( 221) 00:10:08.181 10054.117 - 10106.757: 76.3822% ( 167) 00:10:08.181 10106.757 - 10159.396: 77.8395% ( 194) 00:10:08.181 10159.396 - 10212.035: 79.0565% ( 162) 00:10:08.181 10212.035 - 10264.675: 80.3711% ( 175) 00:10:08.181 10264.675 - 10317.314: 81.3927% ( 136) 00:10:08.181 10317.314 - 10369.953: 82.5721% ( 157) 00:10:08.181 10369.953 - 10422.593: 83.8942% ( 176) 00:10:08.181 10422.593 - 10475.232: 85.0210% ( 150) 00:10:08.181 10475.232 - 10527.871: 86.4408% ( 189) 00:10:08.181 10527.871 - 10580.511: 87.5526% ( 148) 00:10:08.181 10580.511 - 10633.150: 88.7395% ( 158) 00:10:08.181 10633.150 - 10685.790: 89.6034% ( 115) 00:10:08.181 10685.790 - 10738.429: 90.3020% ( 93) 00:10:08.181 10738.429 - 10791.068: 90.9931% ( 92) 00:10:08.181 10791.068 - 10843.708: 91.4213% ( 57) 00:10:08.181 10843.708 - 10896.347: 91.8119% ( 52) 00:10:08.182 10896.347 - 10948.986: 92.3302% ( 69) 00:10:08.182 10948.986 - 11001.626: 92.4805% ( 20) 00:10:08.182 11001.626 - 11054.265: 92.5931% ( 15) 00:10:08.182 11054.265 - 11106.904: 92.7509% ( 21) 00:10:08.182 11106.904 - 11159.544: 92.8486% ( 13) 00:10:08.182 11159.544 - 11212.183: 92.9838% ( 18) 00:10:08.182 11212.183 - 11264.822: 93.0965% ( 15) 00:10:08.182 11264.822 - 11317.462: 93.2843% ( 25) 
00:10:08.182 11317.462 - 11370.101: 93.5171% ( 31) 00:10:08.182 11370.101 - 11422.741: 93.5772% ( 8) 00:10:08.182 11422.741 - 11475.380: 93.6523% ( 10) 00:10:08.182 11475.380 - 11528.019: 93.7275% ( 10) 00:10:08.182 11528.019 - 11580.659: 93.7951% ( 9) 00:10:08.182 11580.659 - 11633.298: 93.8777% ( 11) 00:10:08.182 11633.298 - 11685.937: 93.9904% ( 15) 00:10:08.182 11685.937 - 11738.577: 94.1932% ( 27) 00:10:08.182 11738.577 - 11791.216: 94.2683% ( 10) 00:10:08.182 11791.216 - 11843.855: 94.3660% ( 13) 00:10:08.182 11843.855 - 11896.495: 94.5913% ( 30) 00:10:08.182 11896.495 - 11949.134: 94.6514% ( 8) 00:10:08.182 11949.134 - 12001.773: 94.6890% ( 5) 00:10:08.182 12001.773 - 12054.413: 94.7191% ( 4) 00:10:08.182 12107.052 - 12159.692: 94.7341% ( 2) 00:10:08.182 12159.692 - 12212.331: 94.7716% ( 5) 00:10:08.182 12212.331 - 12264.970: 94.8092% ( 5) 00:10:08.182 12264.970 - 12317.610: 94.8618% ( 7) 00:10:08.182 12317.610 - 12370.249: 94.9069% ( 6) 00:10:08.182 12370.249 - 12422.888: 94.9745% ( 9) 00:10:08.182 12422.888 - 12475.528: 95.0871% ( 15) 00:10:08.182 12475.528 - 12528.167: 95.1623% ( 10) 00:10:08.182 12528.167 - 12580.806: 95.2374% ( 10) 00:10:08.182 12580.806 - 12633.446: 95.3275% ( 12) 00:10:08.182 12633.446 - 12686.085: 95.4102% ( 11) 00:10:08.182 12686.085 - 12738.724: 95.4703% ( 8) 00:10:08.182 12738.724 - 12791.364: 95.5454% ( 10) 00:10:08.182 12791.364 - 12844.003: 95.6055% ( 8) 00:10:08.182 12844.003 - 12896.643: 95.6355% ( 4) 00:10:08.182 12896.643 - 12949.282: 95.6505% ( 2) 00:10:08.182 12949.282 - 13001.921: 95.6656% ( 2) 00:10:08.182 13001.921 - 13054.561: 95.6956% ( 4) 00:10:08.182 13054.561 - 13107.200: 95.7482% ( 7) 00:10:08.182 13107.200 - 13159.839: 95.8083% ( 8) 00:10:08.182 13159.839 - 13212.479: 95.8834% ( 10) 00:10:08.182 13212.479 - 13265.118: 96.0862% ( 27) 00:10:08.182 13265.118 - 13317.757: 96.1764% ( 12) 00:10:08.182 13317.757 - 13370.397: 96.2891% ( 15) 00:10:08.182 13370.397 - 13423.036: 96.3792% ( 12) 00:10:08.182 13423.036 - 13475.676: 96.4919% ( 15) 00:10:08.182 13475.676 - 13580.954: 96.8149% ( 43) 00:10:08.182 13580.954 - 13686.233: 97.0928% ( 37) 00:10:08.182 13686.233 - 13791.512: 97.2882% ( 26) 00:10:08.182 13791.512 - 13896.790: 97.3783% ( 12) 00:10:08.182 13896.790 - 14002.069: 97.4234% ( 6) 00:10:08.182 14002.069 - 14107.348: 97.4760% ( 7) 00:10:08.182 14107.348 - 14212.627: 97.5060% ( 4) 00:10:08.182 14212.627 - 14317.905: 97.5361% ( 4) 00:10:08.182 14317.905 - 14423.184: 97.5586% ( 3) 00:10:08.182 14423.184 - 14528.463: 97.5962% ( 5) 00:10:08.182 15370.692 - 15475.971: 97.6037% ( 1) 00:10:08.182 15475.971 - 15581.250: 97.6337% ( 4) 00:10:08.182 15581.250 - 15686.529: 97.6638% ( 4) 00:10:08.182 15686.529 - 15791.807: 97.7163% ( 7) 00:10:08.182 15791.807 - 15897.086: 97.7764% ( 8) 00:10:08.182 15897.086 - 16002.365: 97.9192% ( 19) 00:10:08.182 16002.365 - 16107.643: 97.9718% ( 7) 00:10:08.182 16107.643 - 16212.922: 98.0919% ( 16) 00:10:08.182 16212.922 - 16318.201: 98.2121% ( 16) 00:10:08.182 16318.201 - 16423.480: 98.3098% ( 13) 00:10:08.182 16423.480 - 16528.758: 98.3398% ( 4) 00:10:08.182 16528.758 - 16634.037: 98.3624% ( 3) 00:10:08.182 16634.037 - 16739.316: 98.3999% ( 5) 00:10:08.182 16739.316 - 16844.594: 98.4450% ( 6) 00:10:08.182 16844.594 - 16949.873: 98.4826% ( 5) 00:10:08.182 16949.873 - 17055.152: 98.5577% ( 10) 00:10:08.182 17055.152 - 17160.431: 98.6478% ( 12) 00:10:08.182 17160.431 - 17265.709: 98.7755% ( 17) 00:10:08.182 17265.709 - 17370.988: 98.9709% ( 26) 00:10:08.182 17370.988 - 17476.267: 99.0385% ( 9) 00:10:08.182 
27793.581 - 28004.138: 99.0535% ( 2) 00:10:08.182 28004.138 - 28214.696: 99.1061% ( 7) 00:10:08.182 28214.696 - 28425.253: 99.1662% ( 8) 00:10:08.182 28425.253 - 28635.810: 99.2263% ( 8) 00:10:08.182 28635.810 - 28846.368: 99.2788% ( 7) 00:10:08.182 28846.368 - 29056.925: 99.3389% ( 8) 00:10:08.182 29056.925 - 29267.483: 99.3990% ( 8) 00:10:08.182 29267.483 - 29478.040: 99.4591% ( 8) 00:10:08.182 29478.040 - 29688.598: 99.5192% ( 8) 00:10:08.182 35794.763 - 36005.320: 99.5718% ( 7) 00:10:08.182 36005.320 - 36215.878: 99.6319% ( 8) 00:10:08.182 36215.878 - 36426.435: 99.6770% ( 6) 00:10:08.182 36426.435 - 36636.993: 99.7296% ( 7) 00:10:08.182 36636.993 - 36847.550: 99.7897% ( 8) 00:10:08.182 36847.550 - 37058.108: 99.8498% ( 8) 00:10:08.182 37058.108 - 37268.665: 99.9023% ( 7) 00:10:08.182 37268.665 - 37479.222: 99.9624% ( 8) 00:10:08.182 37479.222 - 37689.780: 100.0000% ( 5) 00:10:08.182 00:10:08.182 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:08.182 ============================================================================== 00:10:08.182 Range in us Cumulative IO count 00:10:08.182 6948.395 - 7001.035: 0.0075% ( 1) 00:10:08.182 7106.313 - 7158.953: 0.0150% ( 1) 00:10:08.182 7158.953 - 7211.592: 0.0972% ( 11) 00:10:08.182 7211.592 - 7264.231: 0.2467% ( 20) 00:10:08.182 7264.231 - 7316.871: 0.4710% ( 30) 00:10:08.182 7316.871 - 7369.510: 0.7476% ( 37) 00:10:08.182 7369.510 - 7422.149: 0.9196% ( 23) 00:10:08.182 7422.149 - 7474.789: 1.1663% ( 33) 00:10:08.182 7474.789 - 7527.428: 1.5775% ( 55) 00:10:08.182 7527.428 - 7580.067: 2.2653% ( 92) 00:10:08.182 7580.067 - 7632.707: 3.1773% ( 122) 00:10:08.182 7632.707 - 7685.346: 4.3436% ( 156) 00:10:08.182 7685.346 - 7737.986: 5.5921% ( 167) 00:10:08.182 7737.986 - 7790.625: 6.6836% ( 146) 00:10:08.182 7790.625 - 7843.264: 8.2835% ( 214) 00:10:08.182 7843.264 - 7895.904: 9.3002% ( 136) 00:10:08.182 7895.904 - 7948.543: 10.4217% ( 150) 00:10:08.182 7948.543 - 8001.182: 11.4234% ( 134) 00:10:08.182 8001.182 - 8053.822: 12.3131% ( 119) 00:10:08.182 8053.822 - 8106.461: 13.2551% ( 126) 00:10:08.182 8106.461 - 8159.100: 14.1447% ( 119) 00:10:08.182 8159.100 - 8211.740: 15.0269% ( 118) 00:10:08.182 8211.740 - 8264.379: 16.0586% ( 138) 00:10:08.182 8264.379 - 8317.018: 17.2996% ( 166) 00:10:08.182 8317.018 - 8369.658: 18.7500% ( 194) 00:10:08.182 8369.658 - 8422.297: 20.4770% ( 231) 00:10:08.182 8422.297 - 8474.937: 22.1591% ( 225) 00:10:08.182 8474.937 - 8527.576: 23.8861% ( 231) 00:10:08.182 8527.576 - 8580.215: 25.7028% ( 243) 00:10:08.182 8580.215 - 8632.855: 28.0278% ( 311) 00:10:08.182 8632.855 - 8685.494: 30.1809% ( 288) 00:10:08.182 8685.494 - 8738.133: 32.8648% ( 359) 00:10:08.182 8738.133 - 8790.773: 35.1824% ( 310) 00:10:08.182 8790.773 - 8843.412: 37.5822% ( 321) 00:10:08.182 8843.412 - 8896.051: 39.5185% ( 259) 00:10:08.182 8896.051 - 8948.691: 41.3278% ( 242) 00:10:08.182 8948.691 - 9001.330: 43.6603% ( 312) 00:10:08.182 9001.330 - 9053.969: 45.6265% ( 263) 00:10:08.182 9053.969 - 9106.609: 47.6376% ( 269) 00:10:08.182 9106.609 - 9159.248: 49.3346% ( 227) 00:10:08.182 9159.248 - 9211.888: 50.7327% ( 187) 00:10:08.182 9211.888 - 9264.527: 52.1008% ( 183) 00:10:08.182 9264.527 - 9317.166: 53.4166% ( 176) 00:10:08.182 9317.166 - 9369.806: 54.9267% ( 202) 00:10:08.182 9369.806 - 9422.445: 56.3397% ( 189) 00:10:08.182 9422.445 - 9475.084: 58.0742% ( 232) 00:10:08.182 9475.084 - 9527.724: 59.6666% ( 213) 00:10:08.182 9527.724 - 9580.363: 61.6403% ( 264) 00:10:08.182 9580.363 - 9633.002: 63.3597% ( 230) 00:10:08.182 
9633.002 - 9685.642: 64.9671% ( 215) 00:10:08.182 9685.642 - 9738.281: 66.3203% ( 181) 00:10:08.182 9738.281 - 9790.920: 67.5538% ( 165) 00:10:08.182 9790.920 - 9843.560: 68.8472% ( 173) 00:10:08.182 9843.560 - 9896.199: 70.1854% ( 179) 00:10:08.182 9896.199 - 9948.839: 71.3965% ( 162) 00:10:08.182 9948.839 - 10001.478: 73.0861% ( 226) 00:10:08.182 10001.478 - 10054.117: 74.4617% ( 184) 00:10:08.182 10054.117 - 10106.757: 76.2485% ( 239) 00:10:08.182 10106.757 - 10159.396: 77.8260% ( 211) 00:10:08.182 10159.396 - 10212.035: 79.3810% ( 208) 00:10:08.182 10212.035 - 10264.675: 80.4650% ( 145) 00:10:08.182 10264.675 - 10317.314: 81.5191% ( 141) 00:10:08.182 10317.314 - 10369.953: 82.7303% ( 162) 00:10:08.182 10369.953 - 10422.593: 84.0161% ( 172) 00:10:08.182 10422.593 - 10475.232: 85.1450% ( 151) 00:10:08.182 10475.232 - 10527.871: 86.2365% ( 146) 00:10:08.182 10527.871 - 10580.511: 87.2084% ( 130) 00:10:08.182 10580.511 - 10633.150: 88.0084% ( 107) 00:10:08.182 10633.150 - 10685.790: 88.8457% ( 112) 00:10:08.182 10685.790 - 10738.429: 89.9671% ( 150) 00:10:08.182 10738.429 - 10791.068: 90.6773% ( 95) 00:10:08.182 10791.068 - 10843.708: 91.1708% ( 66) 00:10:08.182 10843.708 - 10896.347: 91.4175% ( 33) 00:10:08.182 10896.347 - 10948.986: 91.6492% ( 31) 00:10:08.182 10948.986 - 11001.626: 91.8286% ( 24) 00:10:08.182 11001.626 - 11054.265: 92.0156% ( 25) 00:10:08.182 11054.265 - 11106.904: 92.3146% ( 40) 00:10:08.182 11106.904 - 11159.544: 92.6286% ( 42) 00:10:08.182 11159.544 - 11212.183: 92.7183% ( 12) 00:10:08.182 11212.183 - 11264.822: 92.7856% ( 9) 00:10:08.182 11264.822 - 11317.462: 92.8155% ( 4) 00:10:08.182 11317.462 - 11370.101: 92.8454% ( 4) 00:10:08.182 11475.380 - 11528.019: 92.8603% ( 2) 00:10:08.182 11528.019 - 11580.659: 92.9052% ( 6) 00:10:08.182 11580.659 - 11633.298: 92.9874% ( 11) 00:10:08.182 11633.298 - 11685.937: 93.0921% ( 14) 00:10:08.182 11685.937 - 11738.577: 93.4061% ( 42) 00:10:08.182 11738.577 - 11791.216: 93.7276% ( 43) 00:10:08.182 11791.216 - 11843.855: 94.0042% ( 37) 00:10:08.182 11843.855 - 11896.495: 94.1089% ( 14) 00:10:08.182 11896.495 - 11949.134: 94.1612% ( 7) 00:10:08.182 11949.134 - 12001.773: 94.1986% ( 5) 00:10:08.182 12001.773 - 12054.413: 94.2359% ( 5) 00:10:08.182 12054.413 - 12107.052: 94.2958% ( 8) 00:10:08.182 12107.052 - 12159.692: 94.3406% ( 6) 00:10:08.182 12159.692 - 12212.331: 94.3855% ( 6) 00:10:08.182 12212.331 - 12264.970: 94.4378% ( 7) 00:10:08.182 12264.970 - 12317.610: 94.6023% ( 22) 00:10:08.182 12317.610 - 12370.249: 94.6397% ( 5) 00:10:08.182 12370.249 - 12422.888: 94.6546% ( 2) 00:10:08.182 12422.888 - 12475.528: 94.6696% ( 2) 00:10:08.182 12475.528 - 12528.167: 94.6920% ( 3) 00:10:08.182 12528.167 - 12580.806: 94.7069% ( 2) 00:10:08.182 12580.806 - 12633.446: 94.7219% ( 2) 00:10:08.182 12633.446 - 12686.085: 94.7443% ( 3) 00:10:08.182 12686.085 - 12738.724: 94.7817% ( 5) 00:10:08.182 12738.724 - 12791.364: 94.8565% ( 10) 00:10:08.182 12791.364 - 12844.003: 94.9387% ( 11) 00:10:08.182 12844.003 - 12896.643: 95.0284% ( 12) 00:10:08.182 12896.643 - 12949.282: 95.1256% ( 13) 00:10:08.182 12949.282 - 13001.921: 95.2602% ( 18) 00:10:08.182 13001.921 - 13054.561: 95.3349% ( 10) 00:10:08.182 13054.561 - 13107.200: 95.4022% ( 9) 00:10:08.182 13107.200 - 13159.839: 95.4695% ( 9) 00:10:08.182 13159.839 - 13212.479: 95.5667% ( 13) 00:10:08.182 13212.479 - 13265.118: 95.6863% ( 16) 00:10:08.182 13265.118 - 13317.757: 95.8134% ( 17) 00:10:08.182 13317.757 - 13370.397: 96.0302% ( 29) 00:10:08.182 13370.397 - 13423.036: 96.1423% ( 15) 
00:10:08.182 13423.036 - 13475.676: 96.2919% ( 20) 00:10:08.182 13475.676 - 13580.954: 96.5161% ( 30) 00:10:08.182 13580.954 - 13686.233: 96.6432% ( 17) 00:10:08.182 13686.233 - 13791.512: 96.7105% ( 9) 00:10:08.182 13791.512 - 13896.790: 96.7703% ( 8) 00:10:08.182 13896.790 - 14002.069: 96.8825% ( 15) 00:10:08.182 14002.069 - 14107.348: 97.1217% ( 32) 00:10:08.182 14107.348 - 14212.627: 97.3236% ( 27) 00:10:08.182 14212.627 - 14317.905: 97.4507% ( 17) 00:10:08.182 14317.905 - 14423.184: 97.5628% ( 15) 00:10:08.182 14423.184 - 14528.463: 97.6002% ( 5) 00:10:08.182 14528.463 - 14633.741: 97.6077% ( 1) 00:10:08.182 15475.971 - 15581.250: 97.6525% ( 6) 00:10:08.182 15581.250 - 15686.529: 97.7273% ( 10) 00:10:08.182 15686.529 - 15791.807: 97.8469% ( 16) 00:10:08.182 15791.807 - 15897.086: 97.9665% ( 16) 00:10:08.182 15897.086 - 16002.365: 98.0712% ( 14) 00:10:08.182 16002.365 - 16107.643: 98.2207% ( 20) 00:10:08.182 16107.643 - 16212.922: 98.3553% ( 18) 00:10:08.182 16212.922 - 16318.201: 98.4225% ( 9) 00:10:08.182 16318.201 - 16423.480: 98.4674% ( 6) 00:10:08.182 16423.480 - 16528.758: 98.5048% ( 5) 00:10:08.182 16528.758 - 16634.037: 98.5496% ( 6) 00:10:08.182 16634.037 - 16739.316: 98.5646% ( 2) 00:10:08.182 17476.267 - 17581.545: 98.6020% ( 5) 00:10:08.182 17581.545 - 17686.824: 98.6842% ( 11) 00:10:08.182 17686.824 - 17792.103: 98.7889% ( 14) 00:10:08.182 17792.103 - 17897.382: 98.9085% ( 16) 00:10:08.182 17897.382 - 18002.660: 98.9907% ( 11) 00:10:08.182 18002.660 - 18107.939: 99.0356% ( 6) 00:10:08.182 18107.939 - 18213.218: 99.0431% ( 1) 00:10:08.182 18213.218 - 18318.496: 99.0505% ( 1) 00:10:08.182 18318.496 - 18423.775: 99.0804% ( 4) 00:10:08.183 18423.775 - 18529.054: 99.1103% ( 4) 00:10:08.183 18529.054 - 18634.333: 99.1403% ( 4) 00:10:08.183 18634.333 - 18739.611: 99.1702% ( 4) 00:10:08.183 18739.611 - 18844.890: 99.2001% ( 4) 00:10:08.183 18844.890 - 18950.169: 99.2225% ( 3) 00:10:08.183 18950.169 - 19055.447: 99.2524% ( 4) 00:10:08.183 19055.447 - 19160.726: 99.2823% ( 4) 00:10:08.183 19160.726 - 19266.005: 99.3122% ( 4) 00:10:08.183 19266.005 - 19371.284: 99.3346% ( 3) 00:10:08.183 19371.284 - 19476.562: 99.3645% ( 4) 00:10:08.183 19476.562 - 19581.841: 99.3944% ( 4) 00:10:08.183 19581.841 - 19687.120: 99.4243% ( 4) 00:10:08.183 19687.120 - 19792.398: 99.4542% ( 4) 00:10:08.183 19792.398 - 19897.677: 99.4842% ( 4) 00:10:08.183 19897.677 - 20002.956: 99.5066% ( 3) 00:10:08.183 20002.956 - 20108.235: 99.5215% ( 2) 00:10:08.183 28004.138 - 28214.696: 99.5589% ( 5) 00:10:08.183 28214.696 - 28425.253: 99.6187% ( 8) 00:10:08.183 28425.253 - 28635.810: 99.6711% ( 7) 00:10:08.183 28635.810 - 28846.368: 99.7234% ( 7) 00:10:08.183 28846.368 - 29056.925: 99.7832% ( 8) 00:10:08.183 29056.925 - 29267.483: 99.8355% ( 7) 00:10:08.183 29267.483 - 29478.040: 99.8953% ( 8) 00:10:08.183 29478.040 - 29688.598: 99.9477% ( 7) 00:10:08.183 29688.598 - 29899.155: 100.0000% ( 7) 00:10:08.183 00:10:08.183 ************************************ 00:10:08.183 END TEST nvme_perf 00:10:08.183 ************************************ 00:10:08.183 11:13:45 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:08.183 00:10:08.183 real 0m2.740s 00:10:08.183 user 0m2.303s 00:10:08.183 sys 0m0.329s 00:10:08.183 11:13:45 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:08.183 11:13:45 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:08.441 11:13:45 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 
00:10:08.441 11:13:45 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:08.441 11:13:45 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:08.441 11:13:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.441 ************************************ 00:10:08.441 START TEST nvme_hello_world 00:10:08.441 ************************************ 00:10:08.441 11:13:45 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:08.701 Initializing NVMe Controllers 00:10:08.701 Attached to 0000:00:10.0 00:10:08.701 Namespace ID: 1 size: 6GB 00:10:08.701 Attached to 0000:00:11.0 00:10:08.701 Namespace ID: 1 size: 5GB 00:10:08.701 Attached to 0000:00:13.0 00:10:08.701 Namespace ID: 1 size: 1GB 00:10:08.701 Attached to 0000:00:12.0 00:10:08.701 Namespace ID: 1 size: 4GB 00:10:08.701 Namespace ID: 2 size: 4GB 00:10:08.701 Namespace ID: 3 size: 4GB 00:10:08.701 Initialization complete. 00:10:08.701 INFO: using host memory buffer for IO 00:10:08.701 Hello world! 00:10:08.701 INFO: using host memory buffer for IO 00:10:08.701 Hello world! 00:10:08.701 INFO: using host memory buffer for IO 00:10:08.701 Hello world! 00:10:08.701 INFO: using host memory buffer for IO 00:10:08.701 Hello world! 00:10:08.701 INFO: using host memory buffer for IO 00:10:08.701 Hello world! 00:10:08.701 INFO: using host memory buffer for IO 00:10:08.701 Hello world! 00:10:08.701 ************************************ 00:10:08.701 END TEST nvme_hello_world 00:10:08.701 ************************************ 00:10:08.701 00:10:08.701 real 0m0.324s 00:10:08.701 user 0m0.130s 00:10:08.701 sys 0m0.145s 00:10:08.701 11:13:45 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:08.701 11:13:45 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:08.701 11:13:45 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:08.701 11:13:45 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:08.701 11:13:45 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:08.701 11:13:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.701 ************************************ 00:10:08.701 START TEST nvme_sgl 00:10:08.701 ************************************ 00:10:08.701 11:13:45 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:08.960 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:08.960 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:08.960 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:08.960 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:08.960 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:08.960 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:08.960 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:08.960 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:08.960 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:08.960 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:08.960 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:08.960 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:08.960 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:08.960 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:08.960 0000:00:13.0: build_io_request_2 
Invalid IO length parameter 00:10:08.960 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:08.960 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:08.960 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:08.960 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:08.960 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:08.960 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:08.960 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:08.960 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:08.960 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:08.960 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:08.960 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:08.960 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:08.960 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:08.960 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:08.960 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:08.960 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:08.960 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:08.960 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:10:08.960 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:10:08.960 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:08.960 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:09.219 NVMe Readv/Writev Request test 00:10:09.219 Attached to 0000:00:10.0 00:10:09.219 Attached to 0000:00:11.0 00:10:09.219 Attached to 0000:00:13.0 00:10:09.219 Attached to 0000:00:12.0 00:10:09.219 0000:00:10.0: build_io_request_2 test passed 00:10:09.219 0000:00:10.0: build_io_request_4 test passed 00:10:09.219 0000:00:10.0: build_io_request_5 test passed 00:10:09.219 0000:00:10.0: build_io_request_6 test passed 00:10:09.219 0000:00:10.0: build_io_request_7 test passed 00:10:09.219 0000:00:10.0: build_io_request_10 test passed 00:10:09.219 0000:00:11.0: build_io_request_2 test passed 00:10:09.219 0000:00:11.0: build_io_request_4 test passed 00:10:09.219 0000:00:11.0: build_io_request_5 test passed 00:10:09.219 0000:00:11.0: build_io_request_6 test passed 00:10:09.219 0000:00:11.0: build_io_request_7 test passed 00:10:09.219 0000:00:11.0: build_io_request_10 test passed 00:10:09.219 Cleaning up... 
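Editor's note: the nvme_sgl run above exercises the driver's vectored (scatter-gather) submission path. Requests whose SGE lengths break the length rules are expected to be rejected with "Invalid IO length parameter", while the well-formed ones pass on all four controllers. Below is a minimal sketch of that submission path through SPDK's public writev API; the context struct, buffer layout, and function names are illustrative, not taken from the sgl test source.

/* Minimal scatter-gather write sketch (illustrative; error handling and
 * offset bookkeeping elided). If the SGE lengths do not add up to
 * lba_count * sector_size, the request fails with an invalid-length
 * error like the failing cases above. */
#include <sys/uio.h>
#include "spdk/nvme.h"

struct sgl_ctx {
	struct iovec iov[4];	/* scattered payload segments */
	int iovpos;		/* cursor advanced by next_sge() */
};

static void
reset_sgl(void *cb_arg, uint32_t offset)
{
	((struct sgl_ctx *)cb_arg)->iovpos = 0;	/* restart SGE iteration */
}

static int
next_sge(void *cb_arg, void **address, uint32_t *length)
{
	struct sgl_ctx *ctx = cb_arg;

	*address = ctx->iov[ctx->iovpos].iov_base;
	*length = ctx->iov[ctx->iovpos].iov_len;
	ctx->iovpos++;
	return 0;
}

static void
io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	/* a real caller would check the completion status here */
}

int
submit_sgl_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		 struct sgl_ctx *ctx)
{
	/* 8 blocks at LBA 0, gathered from ctx->iov */
	return spdk_nvme_ns_cmd_writev(ns, qpair, 0, 8, io_done, ctx, 0,
				       reset_sgl, next_sge);
}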
00:10:09.219 ************************************ 00:10:09.219 END TEST nvme_sgl 00:10:09.219 ************************************ 00:10:09.219 00:10:09.219 real 0m0.386s 00:10:09.219 user 0m0.176s 00:10:09.219 sys 0m0.159s 00:10:09.219 11:13:46 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:09.219 11:13:46 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:09.219 11:13:46 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:09.219 11:13:46 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:09.219 11:13:46 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:09.219 11:13:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:09.219 ************************************ 00:10:09.219 START TEST nvme_e2edp 00:10:09.219 ************************************ 00:10:09.219 11:13:46 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:09.479 NVMe Write/Read with End-to-End data protection test 00:10:09.479 Attached to 0000:00:10.0 00:10:09.479 Attached to 0000:00:11.0 00:10:09.479 Attached to 0000:00:13.0 00:10:09.479 Attached to 0000:00:12.0 00:10:09.479 Cleaning up... 00:10:09.479 ************************************ 00:10:09.479 END TEST nvme_e2edp 00:10:09.479 ************************************ 00:10:09.479 00:10:09.479 real 0m0.297s 00:10:09.479 user 0m0.103s 00:10:09.479 sys 0m0.150s 00:10:09.479 11:13:46 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:09.479 11:13:46 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:09.479 11:13:46 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:09.479 11:13:46 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:09.479 11:13:46 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:09.479 11:13:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:09.479 ************************************ 00:10:09.479 START TEST nvme_reserve 00:10:09.479 ************************************ 00:10:09.479 11:13:46 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:09.739 ===================================================== 00:10:09.739 NVMe Controller at PCI bus 0, device 16, function 0 00:10:09.739 ===================================================== 00:10:09.739 Reservations: Not Supported 00:10:09.739 ===================================================== 00:10:09.739 NVMe Controller at PCI bus 0, device 17, function 0 00:10:09.739 ===================================================== 00:10:09.739 Reservations: Not Supported 00:10:09.739 ===================================================== 00:10:09.739 NVMe Controller at PCI bus 0, device 19, function 0 00:10:09.739 ===================================================== 00:10:09.739 Reservations: Not Supported 00:10:09.739 ===================================================== 00:10:09.739 NVMe Controller at PCI bus 0, device 18, function 0 00:10:09.739 ===================================================== 00:10:09.739 Reservations: Not Supported 00:10:09.739 Reservation test passed 00:10:09.739 ************************************ 00:10:09.739 END TEST nvme_reserve 00:10:09.739 ************************************ 00:10:09.739 00:10:09.739 real 0m0.321s 00:10:09.739 user 0m0.125s 00:10:09.739 sys 0m0.146s 00:10:09.739 11:13:47 
nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:09.739 11:13:47 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:09.997 11:13:47 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:09.997 11:13:47 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:09.997 11:13:47 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:09.997 11:13:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:09.997 ************************************ 00:10:09.997 START TEST nvme_err_injection 00:10:09.997 ************************************ 00:10:09.997 11:13:47 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:10.256 NVMe Error Injection test 00:10:10.256 Attached to 0000:00:10.0 00:10:10.256 Attached to 0000:00:11.0 00:10:10.256 Attached to 0000:00:13.0 00:10:10.256 Attached to 0000:00:12.0 00:10:10.256 0000:00:10.0: get features failed as expected 00:10:10.256 0000:00:11.0: get features failed as expected 00:10:10.256 0000:00:13.0: get features failed as expected 00:10:10.256 0000:00:12.0: get features failed as expected 00:10:10.256 0000:00:10.0: get features successfully as expected 00:10:10.256 0000:00:11.0: get features successfully as expected 00:10:10.256 0000:00:13.0: get features successfully as expected 00:10:10.256 0000:00:12.0: get features successfully as expected 00:10:10.256 0000:00:10.0: read failed as expected 00:10:10.256 0000:00:11.0: read failed as expected 00:10:10.256 0000:00:13.0: read failed as expected 00:10:10.256 0000:00:12.0: read failed as expected 00:10:10.256 0000:00:10.0: read successfully as expected 00:10:10.256 0000:00:11.0: read successfully as expected 00:10:10.256 0000:00:13.0: read successfully as expected 00:10:10.256 0000:00:12.0: read successfully as expected 00:10:10.256 Cleaning up... 00:10:10.256 ************************************ 00:10:10.256 END TEST nvme_err_injection 00:10:10.256 ************************************ 00:10:10.256 00:10:10.256 real 0m0.344s 00:10:10.256 user 0m0.128s 00:10:10.256 sys 0m0.167s 00:10:10.256 11:13:47 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.256 11:13:47 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:10.256 11:13:47 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:10.256 11:13:47 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:10:10.256 11:13:47 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.256 11:13:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.256 ************************************ 00:10:10.256 START TEST nvme_overhead 00:10:10.256 ************************************ 00:10:10.256 11:13:47 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:11.633 Initializing NVMe Controllers 00:10:11.633 Attached to 0000:00:10.0 00:10:11.633 Attached to 0000:00:11.0 00:10:11.633 Attached to 0000:00:13.0 00:10:11.633 Attached to 0000:00:12.0 00:10:11.633 Initialization complete. Launching workers. 
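Editor's note: the overhead tool starting here reports, per I/O, how long the submit call and the completion path take; the submit and complete histograms that follow bucket those per-I/O times. A sketch of the measurement idea, assuming a namespace/qpair pair from attach and a DMA-able buffer; the names and the tick-to-nanosecond conversion are illustrative, not the tool's actual bookkeeping.

#include "spdk/env.h"
#include "spdk/nvme.h"

static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	/* a timestamp taken here would feed the "complete" histogram */
}

/* Returns the submit-side cost of one read, in nanoseconds. */
uint64_t
timed_submit(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
	uint64_t tsc_rate = spdk_get_ticks_hz();	/* ticks per second */
	uint64_t start = spdk_get_ticks();

	spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1 /* blocks */,
			      io_complete, NULL, 0);

	return (spdk_get_ticks() - start) * UINT64_C(1000000000) / tsc_rate;
}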
00:10:11.633 submit (in ns) avg, min, max = 15573.9, 12618.5, 142465.1 00:10:11.633 complete (in ns) avg, min, max = 10446.6, 7966.3, 147946.2 00:10:11.633 00:10:11.633 Submit histogram 00:10:11.633 ================ 00:10:11.633 Range in us Cumulative Count 00:10:11.633 12.594 - 12.646: 0.0415% ( 3) 00:10:11.633 12.697 - 12.749: 0.0691% ( 2) 00:10:11.633 12.749 - 12.800: 0.1244% ( 4) 00:10:11.633 12.800 - 12.851: 0.2212% ( 7) 00:10:11.633 12.851 - 12.903: 0.2627% ( 3) 00:10:11.633 12.903 - 12.954: 0.3595% ( 7) 00:10:11.633 12.954 - 13.006: 0.4148% ( 4) 00:10:11.633 13.006 - 13.057: 0.5392% ( 9) 00:10:11.633 13.057 - 13.108: 0.6360% ( 7) 00:10:11.633 13.108 - 13.160: 0.7189% ( 6) 00:10:11.633 13.160 - 13.263: 0.8987% ( 13) 00:10:11.633 13.263 - 13.365: 1.2028% ( 22) 00:10:11.633 13.365 - 13.468: 2.2259% ( 74) 00:10:11.633 13.468 - 13.571: 4.0647% ( 133) 00:10:11.633 13.571 - 13.674: 8.0326% ( 287) 00:10:11.633 13.674 - 13.777: 13.1481% ( 370) 00:10:11.633 13.777 - 13.880: 21.2222% ( 584) 00:10:11.633 13.880 - 13.982: 30.6374% ( 681) 00:10:11.633 13.982 - 14.085: 40.9374% ( 745) 00:10:11.633 14.085 - 14.188: 50.1866% ( 669) 00:10:11.633 14.188 - 14.291: 58.7861% ( 622) 00:10:11.633 14.291 - 14.394: 65.5330% ( 488) 00:10:11.633 14.394 - 14.496: 70.5516% ( 363) 00:10:11.633 14.496 - 14.599: 74.2707% ( 269) 00:10:11.633 14.599 - 14.702: 76.7178% ( 177) 00:10:11.633 14.702 - 14.805: 78.8331% ( 153) 00:10:11.633 14.805 - 14.908: 80.1327% ( 94) 00:10:11.633 14.908 - 15.010: 81.1005% ( 70) 00:10:11.633 15.010 - 15.113: 82.0683% ( 70) 00:10:11.633 15.113 - 15.216: 82.6628% ( 43) 00:10:11.633 15.216 - 15.319: 83.0637% ( 29) 00:10:11.633 15.319 - 15.422: 83.4370% ( 27) 00:10:11.633 15.422 - 15.524: 83.7135% ( 20) 00:10:11.633 15.524 - 15.627: 83.8794% ( 12) 00:10:11.633 15.627 - 15.730: 84.0592% ( 13) 00:10:11.633 15.730 - 15.833: 84.2804% ( 16) 00:10:11.633 15.833 - 15.936: 84.4878% ( 15) 00:10:11.633 15.936 - 16.039: 84.6813% ( 14) 00:10:11.633 16.039 - 16.141: 84.8334% ( 11) 00:10:11.633 16.141 - 16.244: 84.9855% ( 11) 00:10:11.633 16.244 - 16.347: 85.1237% ( 10) 00:10:11.633 16.347 - 16.450: 85.3173% ( 14) 00:10:11.633 16.450 - 16.553: 85.4417% ( 9) 00:10:11.633 16.553 - 16.655: 85.5938% ( 11) 00:10:11.633 16.655 - 16.758: 85.6906% ( 7) 00:10:11.633 16.758 - 16.861: 85.7321% ( 3) 00:10:11.633 16.861 - 16.964: 85.8288% ( 7) 00:10:11.633 16.964 - 17.067: 85.8980% ( 5) 00:10:11.633 17.067 - 17.169: 85.9671% ( 5) 00:10:11.633 17.169 - 17.272: 86.0086% ( 3) 00:10:11.633 17.272 - 17.375: 86.0777% ( 5) 00:10:11.633 17.375 - 17.478: 86.1468% ( 5) 00:10:11.633 17.478 - 17.581: 86.2574% ( 8) 00:10:11.633 17.581 - 17.684: 86.3127% ( 4) 00:10:11.633 17.684 - 17.786: 86.4648% ( 11) 00:10:11.633 17.786 - 17.889: 86.5339% ( 5) 00:10:11.633 17.889 - 17.992: 86.6031% ( 5) 00:10:11.633 17.992 - 18.095: 86.7137% ( 8) 00:10:11.633 18.095 - 18.198: 86.7828% ( 5) 00:10:11.633 18.198 - 18.300: 86.8381% ( 4) 00:10:11.633 18.300 - 18.403: 86.9764% ( 10) 00:10:11.633 18.403 - 18.506: 87.0731% ( 7) 00:10:11.633 18.506 - 18.609: 87.1837% ( 8) 00:10:11.633 18.609 - 18.712: 87.3220% ( 10) 00:10:11.633 18.712 - 18.814: 87.4603% ( 10) 00:10:11.633 18.814 - 18.917: 87.5570% ( 7) 00:10:11.633 18.917 - 19.020: 87.7644% ( 15) 00:10:11.633 19.020 - 19.123: 87.9027% ( 10) 00:10:11.633 19.123 - 19.226: 88.0271% ( 9) 00:10:11.633 19.226 - 19.329: 88.1239% ( 7) 00:10:11.633 19.329 - 19.431: 88.2621% ( 10) 00:10:11.633 19.431 - 19.534: 88.3589% ( 7) 00:10:11.633 19.534 - 19.637: 88.5110% ( 11) 00:10:11.633 19.637 - 19.740: 88.7184% ( 15) 
00:10:11.633 19.740 - 19.843: 88.8290% ( 8) 00:10:11.633 19.843 - 19.945: 89.0087% ( 13) 00:10:11.633 19.945 - 20.048: 89.2576% ( 18) 00:10:11.633 20.048 - 20.151: 89.5479% ( 21) 00:10:11.633 20.151 - 20.254: 89.7691% ( 16) 00:10:11.633 20.254 - 20.357: 89.9903% ( 16) 00:10:11.633 20.357 - 20.459: 90.3083% ( 23) 00:10:11.633 20.459 - 20.562: 90.6401% ( 24) 00:10:11.633 20.562 - 20.665: 90.9719% ( 24) 00:10:11.633 20.665 - 20.768: 91.1378% ( 12) 00:10:11.633 20.768 - 20.871: 91.4005% ( 19) 00:10:11.633 20.871 - 20.973: 91.7462% ( 25) 00:10:11.634 20.973 - 21.076: 92.0503% ( 22) 00:10:11.634 21.076 - 21.179: 92.4098% ( 26) 00:10:11.634 21.179 - 21.282: 92.7969% ( 28) 00:10:11.634 21.282 - 21.385: 93.0734% ( 20) 00:10:11.634 21.385 - 21.488: 93.3084% ( 17) 00:10:11.634 21.488 - 21.590: 93.5020% ( 14) 00:10:11.634 21.590 - 21.693: 93.5988% ( 7) 00:10:11.634 21.693 - 21.796: 93.8062% ( 15) 00:10:11.634 21.796 - 21.899: 93.9859% ( 13) 00:10:11.634 21.899 - 22.002: 94.1242% ( 10) 00:10:11.634 22.002 - 22.104: 94.2486% ( 9) 00:10:11.634 22.104 - 22.207: 94.3315% ( 6) 00:10:11.634 22.207 - 22.310: 94.5389% ( 15) 00:10:11.634 22.310 - 22.413: 94.6633% ( 9) 00:10:11.634 22.413 - 22.516: 94.7878% ( 9) 00:10:11.634 22.516 - 22.618: 94.8984% ( 8) 00:10:11.634 22.618 - 22.721: 94.9813% ( 6) 00:10:11.634 22.721 - 22.824: 95.0643% ( 6) 00:10:11.634 22.824 - 22.927: 95.1749% ( 8) 00:10:11.634 22.927 - 23.030: 95.2302% ( 4) 00:10:11.634 23.030 - 23.133: 95.2578% ( 2) 00:10:11.634 23.133 - 23.235: 95.2993% ( 3) 00:10:11.634 23.235 - 23.338: 95.4514% ( 11) 00:10:11.634 23.338 - 23.441: 95.5067% ( 4) 00:10:11.634 23.441 - 23.544: 95.5897% ( 6) 00:10:11.634 23.544 - 23.647: 95.6450% ( 4) 00:10:11.634 23.647 - 23.749: 95.6726% ( 2) 00:10:11.634 23.749 - 23.852: 95.7970% ( 9) 00:10:11.634 23.852 - 23.955: 95.8800% ( 6) 00:10:11.634 23.955 - 24.058: 95.9629% ( 6) 00:10:11.634 24.058 - 24.161: 96.0182% ( 4) 00:10:11.634 24.161 - 24.263: 96.0736% ( 4) 00:10:11.634 24.263 - 24.366: 96.1565% ( 6) 00:10:11.634 24.366 - 24.469: 96.2118% ( 4) 00:10:11.634 24.469 - 24.572: 96.2948% ( 6) 00:10:11.634 24.572 - 24.675: 96.3915% ( 7) 00:10:11.634 24.675 - 24.778: 96.4330% ( 3) 00:10:11.634 24.778 - 24.880: 96.5021% ( 5) 00:10:11.634 24.880 - 24.983: 96.5436% ( 3) 00:10:11.634 24.983 - 25.086: 96.7095% ( 12) 00:10:11.634 25.086 - 25.189: 96.7787% ( 5) 00:10:11.634 25.189 - 25.292: 96.8754% ( 7) 00:10:11.634 25.292 - 25.394: 96.9999% ( 9) 00:10:11.634 25.394 - 25.497: 97.0275% ( 2) 00:10:11.634 25.497 - 25.600: 97.0413% ( 1) 00:10:11.634 25.600 - 25.703: 97.1243% ( 6) 00:10:11.634 25.806 - 25.908: 97.1796% ( 4) 00:10:11.634 25.908 - 26.011: 97.2072% ( 2) 00:10:11.634 26.011 - 26.114: 97.2211% ( 1) 00:10:11.634 26.114 - 26.217: 97.2764% ( 4) 00:10:11.634 26.217 - 26.320: 97.2902% ( 1) 00:10:11.634 26.320 - 26.525: 97.3455% ( 4) 00:10:11.634 26.525 - 26.731: 97.3732% ( 2) 00:10:11.634 26.731 - 26.937: 97.4699% ( 7) 00:10:11.634 26.937 - 27.142: 97.5529% ( 6) 00:10:11.634 27.142 - 27.348: 97.6220% ( 5) 00:10:11.634 27.348 - 27.553: 97.6497% ( 2) 00:10:11.634 27.553 - 27.759: 97.7050% ( 4) 00:10:11.634 27.759 - 27.965: 97.7464% ( 3) 00:10:11.634 27.965 - 28.170: 97.8017% ( 4) 00:10:11.634 28.170 - 28.376: 97.8985% ( 7) 00:10:11.634 28.376 - 28.582: 97.9953% ( 7) 00:10:11.634 28.582 - 28.787: 98.0644% ( 5) 00:10:11.634 28.787 - 28.993: 98.1197% ( 4) 00:10:11.634 28.993 - 29.198: 98.1474% ( 2) 00:10:11.634 29.198 - 29.404: 98.1612% ( 1) 00:10:11.634 29.404 - 29.610: 98.1889% ( 2) 00:10:11.634 29.610 - 29.815: 98.2165% ( 2) 
00:10:11.634 29.815 - 30.021: 98.2718% ( 4) 00:10:11.634 30.021 - 30.227: 98.3133% ( 3) 00:10:11.634 30.227 - 30.432: 98.3271% ( 1) 00:10:11.634 30.432 - 30.638: 98.4239% ( 7) 00:10:11.634 30.638 - 30.843: 98.4515% ( 2) 00:10:11.634 30.843 - 31.049: 98.4930% ( 3) 00:10:11.634 31.049 - 31.255: 98.5207% ( 2) 00:10:11.634 31.255 - 31.460: 98.5621% ( 3) 00:10:11.634 31.871 - 32.077: 98.5898% ( 2) 00:10:11.634 32.077 - 32.283: 98.6174% ( 2) 00:10:11.634 32.283 - 32.488: 98.6589% ( 3) 00:10:11.634 32.488 - 32.694: 98.7004% ( 3) 00:10:11.634 32.694 - 32.900: 98.7142% ( 1) 00:10:11.634 32.900 - 33.105: 98.7419% ( 2) 00:10:11.634 33.105 - 33.311: 98.7695% ( 2) 00:10:11.634 33.311 - 33.516: 98.8248% ( 4) 00:10:11.634 33.516 - 33.722: 98.8525% ( 2) 00:10:11.634 33.722 - 33.928: 98.8940% ( 3) 00:10:11.634 33.928 - 34.133: 98.9078% ( 1) 00:10:11.634 34.133 - 34.339: 98.9354% ( 2) 00:10:11.634 34.339 - 34.545: 98.9493% ( 1) 00:10:11.634 34.545 - 34.750: 98.9907% ( 3) 00:10:11.634 34.750 - 34.956: 99.0322% ( 3) 00:10:11.634 34.956 - 35.161: 99.0599% ( 2) 00:10:11.634 35.161 - 35.367: 99.1013% ( 3) 00:10:11.634 35.778 - 35.984: 99.1428% ( 3) 00:10:11.634 35.984 - 36.190: 99.1705% ( 2) 00:10:11.634 36.190 - 36.395: 99.1843% ( 1) 00:10:11.634 36.395 - 36.601: 99.2396% ( 4) 00:10:11.634 36.601 - 36.806: 99.2534% ( 1) 00:10:11.634 36.806 - 37.012: 99.2949% ( 3) 00:10:11.634 37.218 - 37.423: 99.3225% ( 2) 00:10:11.634 37.423 - 37.629: 99.3502% ( 2) 00:10:11.634 37.629 - 37.835: 99.3640% ( 1) 00:10:11.634 37.835 - 38.040: 99.3779% ( 1) 00:10:11.634 38.040 - 38.246: 99.3917% ( 1) 00:10:11.634 38.246 - 38.451: 99.4055% ( 1) 00:10:11.634 38.451 - 38.657: 99.4193% ( 1) 00:10:11.634 38.657 - 38.863: 99.4332% ( 1) 00:10:11.634 38.863 - 39.068: 99.4608% ( 2) 00:10:11.634 39.068 - 39.274: 99.4885% ( 2) 00:10:11.634 39.274 - 39.480: 99.5161% ( 2) 00:10:11.634 39.480 - 39.685: 99.5299% ( 1) 00:10:11.634 39.685 - 39.891: 99.5438% ( 1) 00:10:11.634 40.302 - 40.508: 99.5576% ( 1) 00:10:11.634 40.713 - 40.919: 99.5991% ( 3) 00:10:11.634 40.919 - 41.124: 99.6129% ( 1) 00:10:11.634 41.124 - 41.330: 99.6267% ( 1) 00:10:11.634 41.536 - 41.741: 99.6405% ( 1) 00:10:11.634 42.153 - 42.358: 99.6544% ( 1) 00:10:11.634 43.592 - 43.798: 99.6820% ( 2) 00:10:11.634 44.826 - 45.031: 99.6958% ( 1) 00:10:11.634 45.031 - 45.237: 99.7097% ( 1) 00:10:11.634 45.443 - 45.648: 99.7235% ( 1) 00:10:11.634 46.471 - 46.676: 99.7373% ( 1) 00:10:11.634 48.527 - 48.733: 99.7511% ( 1) 00:10:11.634 49.349 - 49.555: 99.7650% ( 1) 00:10:11.634 50.994 - 51.200: 99.7788% ( 1) 00:10:11.634 52.228 - 52.434: 99.7926% ( 1) 00:10:11.634 53.462 - 53.873: 99.8064% ( 1) 00:10:11.634 54.284 - 54.696: 99.8203% ( 1) 00:10:11.634 55.518 - 55.929: 99.8341% ( 1) 00:10:11.634 58.397 - 58.808: 99.8479% ( 1) 00:10:11.634 59.219 - 59.631: 99.8617% ( 1) 00:10:11.634 61.276 - 61.687: 99.8756% ( 1) 00:10:11.634 61.687 - 62.098: 99.8894% ( 1) 00:10:11.634 68.267 - 68.678: 99.9032% ( 1) 00:10:11.634 72.379 - 72.790: 99.9170% ( 1) 00:10:11.634 90.885 - 91.296: 99.9309% ( 1) 00:10:11.634 91.296 - 91.708: 99.9447% ( 1) 00:10:11.634 94.586 - 94.998: 99.9585% ( 1) 00:10:11.634 101.578 - 101.989: 99.9723% ( 1) 00:10:11.634 108.569 - 109.391: 99.9862% ( 1) 00:10:11.634 142.291 - 143.113: 100.0000% ( 1) 00:10:11.634 00:10:11.634 Complete histogram 00:10:11.634 ================== 00:10:11.634 Range in us Cumulative Count 00:10:11.634 7.916 - 7.968: 0.0138% ( 1) 00:10:11.634 7.968 - 8.019: 0.0691% ( 4) 00:10:11.635 8.019 - 8.071: 0.1383% ( 5) 00:10:11.635 8.071 - 8.122: 0.1936% ( 4) 
00:10:11.635 8.122 - 8.173: 0.3180% ( 9) 00:10:11.635 8.173 - 8.225: 0.4148% ( 7) 00:10:11.635 8.225 - 8.276: 0.5254% ( 8) 00:10:11.635 8.276 - 8.328: 0.5945% ( 5) 00:10:11.635 8.328 - 8.379: 0.7051% ( 8) 00:10:11.635 8.379 - 8.431: 0.8019% ( 7) 00:10:11.635 8.431 - 8.482: 1.0231% ( 16) 00:10:11.635 8.482 - 8.533: 1.1752% ( 11) 00:10:11.635 8.533 - 8.585: 1.3826% ( 15) 00:10:11.635 8.585 - 8.636: 1.7697% ( 28) 00:10:11.635 8.636 - 8.688: 2.3918% ( 45) 00:10:11.635 8.688 - 8.739: 3.1660% ( 56) 00:10:11.635 8.739 - 8.790: 3.7467% ( 42) 00:10:11.635 8.790 - 8.842: 4.3965% ( 47) 00:10:11.635 8.842 - 8.893: 4.7698% ( 27) 00:10:11.635 8.893 - 8.945: 5.0740% ( 22) 00:10:11.635 8.945 - 8.996: 5.4196% ( 25) 00:10:11.635 8.996 - 9.047: 5.7099% ( 21) 00:10:11.635 9.047 - 9.099: 6.0832% ( 27) 00:10:11.635 9.099 - 9.150: 6.3736% ( 21) 00:10:11.635 9.150 - 9.202: 7.1340% ( 55) 00:10:11.635 9.202 - 9.253: 9.6640% ( 183) 00:10:11.635 9.253 - 9.304: 15.2219% ( 402) 00:10:11.635 9.304 - 9.356: 22.2453% ( 508) 00:10:11.635 9.356 - 9.407: 29.5037% ( 525) 00:10:11.635 9.407 - 9.459: 37.9096% ( 608) 00:10:11.635 9.459 - 9.510: 46.2740% ( 605) 00:10:11.635 9.510 - 9.561: 53.6707% ( 535) 00:10:11.635 9.561 - 9.613: 59.5880% ( 428) 00:10:11.635 9.613 - 9.664: 64.1228% ( 328) 00:10:11.635 9.664 - 9.716: 67.1367% ( 218) 00:10:11.635 9.716 - 9.767: 69.7636% ( 190) 00:10:11.635 9.767 - 9.818: 72.3766% ( 189) 00:10:11.635 9.818 - 9.870: 74.5196% ( 155) 00:10:11.635 9.870 - 9.921: 76.3860% ( 135) 00:10:11.635 9.921 - 9.973: 78.4045% ( 146) 00:10:11.635 9.973 - 10.024: 80.1742% ( 128) 00:10:11.635 10.024 - 10.076: 81.7641% ( 115) 00:10:11.635 10.076 - 10.127: 82.8702% ( 80) 00:10:11.635 10.127 - 10.178: 83.5338% ( 48) 00:10:11.635 10.178 - 10.230: 83.9900% ( 33) 00:10:11.635 10.230 - 10.281: 84.4739% ( 35) 00:10:11.635 10.281 - 10.333: 84.8472% ( 27) 00:10:11.635 10.333 - 10.384: 85.0684% ( 16) 00:10:11.635 10.384 - 10.435: 85.2343% ( 12) 00:10:11.635 10.435 - 10.487: 85.4279% ( 14) 00:10:11.635 10.487 - 10.538: 85.5938% ( 12) 00:10:11.635 10.538 - 10.590: 85.7459% ( 11) 00:10:11.635 10.590 - 10.641: 85.8841% ( 10) 00:10:11.635 10.641 - 10.692: 86.1054% ( 16) 00:10:11.635 10.692 - 10.744: 86.2713% ( 12) 00:10:11.635 10.744 - 10.795: 86.4510% ( 13) 00:10:11.635 10.795 - 10.847: 86.6722% ( 16) 00:10:11.635 10.847 - 10.898: 86.8381% ( 12) 00:10:11.635 10.898 - 10.949: 87.1561% ( 23) 00:10:11.635 10.949 - 11.001: 87.3496% ( 14) 00:10:11.635 11.001 - 11.052: 87.4879% ( 10) 00:10:11.635 11.052 - 11.104: 87.6400% ( 11) 00:10:11.635 11.104 - 11.155: 87.8059% ( 12) 00:10:11.635 11.155 - 11.206: 87.9027% ( 7) 00:10:11.635 11.206 - 11.258: 88.0547% ( 11) 00:10:11.635 11.258 - 11.309: 88.1792% ( 9) 00:10:11.635 11.309 - 11.361: 88.2483% ( 5) 00:10:11.635 11.361 - 11.412: 88.3313% ( 6) 00:10:11.635 11.412 - 11.463: 88.4419% ( 8) 00:10:11.635 11.463 - 11.515: 88.5248% ( 6) 00:10:11.635 11.515 - 11.566: 88.6354% ( 8) 00:10:11.635 11.566 - 11.618: 88.6907% ( 4) 00:10:11.635 11.618 - 11.669: 88.8152% ( 9) 00:10:11.635 11.669 - 11.720: 88.9119% ( 7) 00:10:11.635 11.720 - 11.772: 88.9672% ( 4) 00:10:11.635 11.772 - 11.823: 89.1055% ( 10) 00:10:11.635 11.823 - 11.875: 89.1608% ( 4) 00:10:11.635 11.875 - 11.926: 89.2161% ( 4) 00:10:11.635 11.926 - 11.978: 89.2990% ( 6) 00:10:11.635 11.978 - 12.029: 89.3405% ( 3) 00:10:11.635 12.029 - 12.080: 89.4511% ( 8) 00:10:11.635 12.080 - 12.132: 89.5064% ( 4) 00:10:11.635 12.132 - 12.183: 89.6309% ( 9) 00:10:11.635 12.183 - 12.235: 89.6862% ( 4) 00:10:11.635 12.235 - 12.286: 89.7691% ( 6) 00:10:11.635 
12.286 - 12.337: 89.8382% ( 5) 00:10:11.635 12.337 - 12.389: 89.8797% ( 3) 00:10:11.635 12.389 - 12.440: 89.9903% ( 8) 00:10:11.635 12.440 - 12.492: 90.0733% ( 6) 00:10:11.635 12.492 - 12.543: 90.1424% ( 5) 00:10:11.635 12.543 - 12.594: 90.3083% ( 12) 00:10:11.635 12.594 - 12.646: 90.4051% ( 7) 00:10:11.635 12.646 - 12.697: 90.4742% ( 5) 00:10:11.635 12.697 - 12.749: 90.5157% ( 3) 00:10:11.635 12.749 - 12.800: 90.6125% ( 7) 00:10:11.635 12.800 - 12.851: 90.7231% ( 8) 00:10:11.635 12.851 - 12.903: 90.8613% ( 10) 00:10:11.635 12.903 - 12.954: 90.9166% ( 4) 00:10:11.635 12.954 - 13.006: 90.9996% ( 6) 00:10:11.635 13.006 - 13.057: 91.0411% ( 3) 00:10:11.635 13.057 - 13.108: 91.1517% ( 8) 00:10:11.635 13.108 - 13.160: 91.2208% ( 5) 00:10:11.635 13.160 - 13.263: 91.3867% ( 12) 00:10:11.635 13.263 - 13.365: 91.5388% ( 11) 00:10:11.635 13.365 - 13.468: 91.7185% ( 13) 00:10:11.635 13.468 - 13.571: 91.9535% ( 17) 00:10:11.635 13.571 - 13.674: 92.2024% ( 18) 00:10:11.635 13.674 - 13.777: 92.3960% ( 14) 00:10:11.635 13.777 - 13.880: 92.5619% ( 12) 00:10:11.635 13.880 - 13.982: 92.7693% ( 15) 00:10:11.635 13.982 - 14.085: 92.9490% ( 13) 00:10:11.635 14.085 - 14.188: 93.0596% ( 8) 00:10:11.635 14.188 - 14.291: 93.1840% ( 9) 00:10:11.635 14.291 - 14.394: 93.3084% ( 9) 00:10:11.635 14.394 - 14.496: 93.4882% ( 13) 00:10:11.635 14.496 - 14.599: 93.6541% ( 12) 00:10:11.635 14.599 - 14.702: 93.7647% ( 8) 00:10:11.635 14.702 - 14.805: 93.8476% ( 6) 00:10:11.635 14.805 - 14.908: 93.9859% ( 10) 00:10:11.635 14.908 - 15.010: 94.1103% ( 9) 00:10:11.635 15.010 - 15.113: 94.2071% ( 7) 00:10:11.635 15.113 - 15.216: 94.2624% ( 4) 00:10:11.635 15.216 - 15.319: 94.4007% ( 10) 00:10:11.635 15.319 - 15.422: 94.4560% ( 4) 00:10:11.635 15.422 - 15.524: 94.4698% ( 1) 00:10:11.635 15.524 - 15.627: 94.5666% ( 7) 00:10:11.635 15.627 - 15.730: 94.6357% ( 5) 00:10:11.635 15.730 - 15.833: 94.7048% ( 5) 00:10:11.635 15.833 - 15.936: 94.8154% ( 8) 00:10:11.635 15.936 - 16.039: 94.9260% ( 8) 00:10:11.635 16.039 - 16.141: 95.0919% ( 12) 00:10:11.635 16.141 - 16.244: 95.1887% ( 7) 00:10:11.635 16.244 - 16.347: 95.2993% ( 8) 00:10:11.635 16.347 - 16.450: 95.4376% ( 10) 00:10:11.635 16.450 - 16.553: 95.5205% ( 6) 00:10:11.635 16.553 - 16.655: 95.7279% ( 15) 00:10:11.635 16.655 - 16.758: 95.8109% ( 6) 00:10:11.635 16.758 - 16.861: 95.8662% ( 4) 00:10:11.635 16.861 - 16.964: 96.0321% ( 12) 00:10:11.635 16.964 - 17.067: 96.1289% ( 7) 00:10:11.635 17.067 - 17.169: 96.2948% ( 12) 00:10:11.635 17.169 - 17.272: 96.4192% ( 9) 00:10:11.635 17.272 - 17.375: 96.5160% ( 7) 00:10:11.635 17.375 - 17.478: 96.6266% ( 8) 00:10:11.635 17.478 - 17.581: 96.6957% ( 5) 00:10:11.635 17.581 - 17.684: 96.8063% ( 8) 00:10:11.635 17.684 - 17.786: 96.8893% ( 6) 00:10:11.635 17.786 - 17.889: 96.9722% ( 6) 00:10:11.635 17.889 - 17.992: 97.0137% ( 3) 00:10:11.635 17.992 - 18.095: 97.0690% ( 4) 00:10:11.635 18.095 - 18.198: 97.1381% ( 5) 00:10:11.635 18.198 - 18.300: 97.1658% ( 2) 00:10:11.635 18.300 - 18.403: 97.2211% ( 4) 00:10:11.635 18.403 - 18.506: 97.3040% ( 6) 00:10:11.635 18.506 - 18.609: 97.3593% ( 4) 00:10:11.635 18.609 - 18.712: 97.4146% ( 4) 00:10:11.635 18.712 - 18.814: 97.4699% ( 4) 00:10:11.635 18.814 - 18.917: 97.5114% ( 3) 00:10:11.635 18.917 - 19.020: 97.5667% ( 4) 00:10:11.635 19.020 - 19.123: 97.5805% ( 1) 00:10:11.635 19.123 - 19.226: 97.6082% ( 2) 00:10:11.635 19.226 - 19.329: 97.6220% ( 1) 00:10:11.635 19.329 - 19.431: 97.6497% ( 2) 00:10:11.635 19.431 - 19.534: 97.6773% ( 2) 00:10:11.635 19.534 - 19.637: 97.7326% ( 4) 00:10:11.635 19.637 - 
19.740: 97.7603% ( 2) 00:10:11.635 19.740 - 19.843: 97.8017% ( 3) 00:10:11.635 19.843 - 19.945: 97.8156% ( 1) 00:10:11.635 19.945 - 20.048: 97.8570% ( 3) 00:10:11.635 20.151 - 20.254: 97.8847% ( 2) 00:10:11.635 20.254 - 20.357: 97.8985% ( 1) 00:10:11.635 20.357 - 20.459: 97.9400% ( 3) 00:10:11.635 20.459 - 20.562: 98.0230% ( 6) 00:10:11.635 20.562 - 20.665: 98.0506% ( 2) 00:10:11.635 20.665 - 20.768: 98.0921% ( 3) 00:10:11.635 20.768 - 20.871: 98.1474% ( 4) 00:10:11.635 21.076 - 21.179: 98.1889% ( 3) 00:10:11.635 21.179 - 21.282: 98.2303% ( 3) 00:10:11.635 21.385 - 21.488: 98.2580% ( 2) 00:10:11.635 21.488 - 21.590: 98.2718% ( 1) 00:10:11.635 21.590 - 21.693: 98.2856% ( 1) 00:10:11.635 21.693 - 21.796: 98.2995% ( 1) 00:10:11.635 21.796 - 21.899: 98.3409% ( 3) 00:10:11.635 21.899 - 22.002: 98.3548% ( 1) 00:10:11.635 22.104 - 22.207: 98.3824% ( 2) 00:10:11.635 22.207 - 22.310: 98.4239% ( 3) 00:10:11.635 22.310 - 22.413: 98.4654% ( 3) 00:10:11.635 22.413 - 22.516: 98.4930% ( 2) 00:10:11.635 22.516 - 22.618: 98.5207% ( 2) 00:10:11.635 22.618 - 22.721: 98.5483% ( 2) 00:10:11.635 22.721 - 22.824: 98.5760% ( 2) 00:10:11.636 23.133 - 23.235: 98.6036% ( 2) 00:10:11.636 23.235 - 23.338: 98.6174% ( 1) 00:10:11.636 23.338 - 23.441: 98.6313% ( 1) 00:10:11.636 23.441 - 23.544: 98.6589% ( 2) 00:10:11.636 23.544 - 23.647: 98.6727% ( 1) 00:10:11.636 23.852 - 23.955: 98.6866% ( 1) 00:10:11.636 23.955 - 24.058: 98.7004% ( 1) 00:10:11.636 24.161 - 24.263: 98.7142% ( 1) 00:10:11.636 24.263 - 24.366: 98.7281% ( 1) 00:10:11.636 24.366 - 24.469: 98.7419% ( 1) 00:10:11.636 24.469 - 24.572: 98.7695% ( 2) 00:10:11.636 24.572 - 24.675: 98.8110% ( 3) 00:10:11.636 24.778 - 24.880: 98.8387% ( 2) 00:10:11.636 24.983 - 25.086: 98.8663% ( 2) 00:10:11.636 25.189 - 25.292: 98.8940% ( 2) 00:10:11.636 25.292 - 25.394: 98.9493% ( 4) 00:10:11.636 25.394 - 25.497: 98.9769% ( 2) 00:10:11.636 25.497 - 25.600: 99.0046% ( 2) 00:10:11.636 25.600 - 25.703: 99.0184% ( 1) 00:10:11.636 25.703 - 25.806: 99.0322% ( 1) 00:10:11.636 25.806 - 25.908: 99.0599% ( 2) 00:10:11.636 26.011 - 26.114: 99.0875% ( 2) 00:10:11.636 26.217 - 26.320: 99.1013% ( 1) 00:10:11.636 26.320 - 26.525: 99.1290% ( 2) 00:10:11.636 26.525 - 26.731: 99.1705% ( 3) 00:10:11.636 26.731 - 26.937: 99.1843% ( 1) 00:10:11.636 26.937 - 27.142: 99.2119% ( 2) 00:10:11.636 27.142 - 27.348: 99.2258% ( 1) 00:10:11.636 27.348 - 27.553: 99.2949% ( 5) 00:10:11.636 27.553 - 27.759: 99.3087% ( 1) 00:10:11.636 27.759 - 27.965: 99.3364% ( 2) 00:10:11.636 27.965 - 28.170: 99.3502% ( 1) 00:10:11.636 28.170 - 28.376: 99.3779% ( 2) 00:10:11.636 28.582 - 28.787: 99.3917% ( 1) 00:10:11.636 29.198 - 29.404: 99.4193% ( 2) 00:10:11.636 29.404 - 29.610: 99.4885% ( 5) 00:10:11.636 29.610 - 29.815: 99.5023% ( 1) 00:10:11.636 30.021 - 30.227: 99.5299% ( 2) 00:10:11.636 30.227 - 30.432: 99.5576% ( 2) 00:10:11.636 30.432 - 30.638: 99.5991% ( 3) 00:10:11.636 30.843 - 31.049: 99.6129% ( 1) 00:10:11.636 31.255 - 31.460: 99.6405% ( 2) 00:10:11.636 31.871 - 32.077: 99.6544% ( 1) 00:10:11.636 32.283 - 32.488: 99.6682% ( 1) 00:10:11.636 32.694 - 32.900: 99.6820% ( 1) 00:10:11.636 32.900 - 33.105: 99.6958% ( 1) 00:10:11.636 33.105 - 33.311: 99.7097% ( 1) 00:10:11.636 33.311 - 33.516: 99.7373% ( 2) 00:10:11.636 33.722 - 33.928: 99.7511% ( 1) 00:10:11.636 33.928 - 34.133: 99.7650% ( 1) 00:10:11.636 34.133 - 34.339: 99.7788% ( 1) 00:10:11.636 34.545 - 34.750: 99.7926% ( 1) 00:10:11.636 34.956 - 35.161: 99.8064% ( 1) 00:10:11.636 35.161 - 35.367: 99.8203% ( 1) 00:10:11.636 37.012 - 37.218: 99.8341% ( 1) 00:10:11.636 
37.423 - 37.629: 99.8479% ( 1) 00:10:11.636 41.124 - 41.330: 99.8617% ( 1) 00:10:11.636 45.443 - 45.648: 99.8756% ( 1) 00:10:11.636 46.265 - 46.471: 99.8894% ( 1) 00:10:11.636 50.994 - 51.200: 99.9032% ( 1) 00:10:11.636 54.284 - 54.696: 99.9170% ( 1) 00:10:11.636 57.574 - 57.986: 99.9309% ( 1) 00:10:11.636 68.267 - 68.678: 99.9447% ( 1) 00:10:11.636 70.323 - 70.734: 99.9585% ( 1) 00:10:11.636 83.483 - 83.894: 99.9723% ( 1) 00:10:11.636 88.418 - 88.829: 99.9862% ( 1) 00:10:11.636 147.226 - 148.048: 100.0000% ( 1) 00:10:11.636 00:10:11.636 ************************************ 00:10:11.636 END TEST nvme_overhead 00:10:11.636 ************************************ 00:10:11.636 00:10:11.636 real 0m1.348s 00:10:11.636 user 0m1.100s 00:10:11.636 sys 0m0.194s 00:10:11.636 11:13:48 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:11.636 11:13:48 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:11.636 11:13:48 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:11.636 11:13:48 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:10:11.636 11:13:48 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.636 11:13:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:11.636 ************************************ 00:10:11.636 START TEST nvme_arbitration 00:10:11.636 ************************************ 00:10:11.636 11:13:48 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:15.826 Initializing NVMe Controllers 00:10:15.826 Attached to 0000:00:10.0 00:10:15.826 Attached to 0000:00:11.0 00:10:15.826 Attached to 0000:00:13.0 00:10:15.826 Attached to 0000:00:12.0 00:10:15.826 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:15.826 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:15.826 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:15.826 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:15.826 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:15.826 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:15.826 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:15.826 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:15.826 Initialization complete. Launching workers. 
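Editor's note: the arbitration example whose configuration is echoed above pairs I/O queues of different priorities with different cores, which is what produces the per-core IO/s spread in the lines that follow. A sketch of the two SPDK knobs involved, weighted round robin requested at controller enable and a per-qpair priority; the field and enum names are from the public headers, the surrounding code is illustrative.

#include <stdbool.h>
#include "spdk/nvme.h"

/* Request weighted-round-robin arbitration when the controller is enabled. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;
	return true;
}

/* Allocate an I/O queue pair with urgent priority for one worker core. */
struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts qopts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &qopts, sizeof(qopts));
	qopts.qprio = SPDK_NVME_QPRIO_URGENT;	/* also LOW, MEDIUM, HIGH */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &qopts, sizeof(qopts));
}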
00:10:15.826 Starting thread on core 1 with urgent priority queue 00:10:15.826 Starting thread on core 2 with urgent priority queue 00:10:15.826 Starting thread on core 3 with urgent priority queue 00:10:15.826 Starting thread on core 0 with urgent priority queue 00:10:15.826 QEMU NVMe Ctrl (12340 ) core 0: 405.33 IO/s 246.71 secs/100000 ios 00:10:15.826 QEMU NVMe Ctrl (12342 ) core 0: 405.33 IO/s 246.71 secs/100000 ios 00:10:15.826 QEMU NVMe Ctrl (12341 ) core 1: 469.33 IO/s 213.07 secs/100000 ios 00:10:15.826 QEMU NVMe Ctrl (12342 ) core 1: 469.33 IO/s 213.07 secs/100000 ios 00:10:15.826 QEMU NVMe Ctrl (12343 ) core 2: 896.00 IO/s 111.61 secs/100000 ios 00:10:15.826 QEMU NVMe Ctrl (12342 ) core 3: 426.67 IO/s 234.38 secs/100000 ios 00:10:15.826 ======================================================== 00:10:15.826 00:10:15.826 ************************************ 00:10:15.826 END TEST nvme_arbitration 00:10:15.826 ************************************ 00:10:15.826 00:10:15.826 real 0m3.481s 00:10:15.826 user 0m9.432s 00:10:15.826 sys 0m0.194s 00:10:15.826 11:13:52 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.826 11:13:52 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:15.826 11:13:52 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:15.826 11:13:52 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:15.826 11:13:52 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.826 11:13:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:15.826 ************************************ 00:10:15.826 START TEST nvme_single_aen 00:10:15.826 ************************************ 00:10:15.826 11:13:52 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:15.826 Asynchronous Event Request test 00:10:15.826 Attached to 0000:00:10.0 00:10:15.826 Attached to 0000:00:11.0 00:10:15.826 Attached to 0000:00:13.0 00:10:15.826 Attached to 0000:00:12.0 00:10:15.826 Reset controller to setup AER completions for this process 00:10:15.826 Registering asynchronous event callbacks... 
00:10:15.826 Getting orig temperature thresholds of all controllers 00:10:15.826 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:15.826 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:15.826 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:15.826 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:15.826 Setting all controllers temperature threshold low to trigger AER 00:10:15.826 Waiting for all controllers temperature threshold to be set lower 00:10:15.826 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:15.826 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:15.826 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:15.826 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:15.826 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:15.826 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:15.826 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:15.826 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:15.826 Waiting for all controllers to trigger AER and reset threshold 00:10:15.826 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:15.826 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:15.826 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:15.826 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:15.826 Cleaning up... 00:10:15.826 ************************************ 00:10:15.826 END TEST nvme_single_aen 00:10:15.826 ************************************ 00:10:15.826 00:10:15.826 real 0m0.332s 00:10:15.826 user 0m0.116s 00:10:15.826 sys 0m0.165s 00:10:15.826 11:13:52 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.826 11:13:52 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:15.826 11:13:52 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:15.826 11:13:52 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:15.826 11:13:52 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.826 11:13:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:15.826 ************************************ 00:10:15.826 START TEST nvme_doorbell_aers 00:10:15.826 ************************************ 00:10:15.827 11:13:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:10:15.827 11:13:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:15.827 11:13:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:15.827 11:13:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:15.827 11:13:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:15.827 11:13:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:15.827 11:13:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:10:15.827 11:13:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:15.827 11:13:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:15.827 11:13:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
00:10:15.827 11:13:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:15.827 11:13:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:15.827 11:13:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:15.827 11:13:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:16.085 [2024-11-15 11:13:53.385608] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:26.077 Executing: test_write_invalid_db 00:10:26.077 Waiting for AER completion... 00:10:26.077 Failure: test_write_invalid_db 00:10:26.077 00:10:26.077 Executing: test_invalid_db_write_overflow_sq 00:10:26.077 Waiting for AER completion... 00:10:26.077 Failure: test_invalid_db_write_overflow_sq 00:10:26.077 00:10:26.077 Executing: test_invalid_db_write_overflow_cq 00:10:26.077 Waiting for AER completion... 00:10:26.077 Failure: test_invalid_db_write_overflow_cq 00:10:26.077 00:10:26.077 11:14:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:26.077 11:14:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:26.077 [2024-11-15 11:14:03.460438] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:36.100 Executing: test_write_invalid_db 00:10:36.100 Waiting for AER completion... 00:10:36.100 Failure: test_write_invalid_db 00:10:36.100 00:10:36.100 Executing: test_invalid_db_write_overflow_sq 00:10:36.100 Waiting for AER completion... 00:10:36.100 Failure: test_invalid_db_write_overflow_sq 00:10:36.100 00:10:36.100 Executing: test_invalid_db_write_overflow_cq 00:10:36.100 Waiting for AER completion... 00:10:36.100 Failure: test_invalid_db_write_overflow_cq 00:10:36.100 00:10:36.100 11:14:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:36.100 11:14:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:36.100 [2024-11-15 11:14:13.476452] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:46.168 Executing: test_write_invalid_db 00:10:46.168 Waiting for AER completion... 00:10:46.168 Failure: test_write_invalid_db 00:10:46.168 00:10:46.168 Executing: test_invalid_db_write_overflow_sq 00:10:46.168 Waiting for AER completion... 00:10:46.168 Failure: test_invalid_db_write_overflow_sq 00:10:46.168 00:10:46.168 Executing: test_invalid_db_write_overflow_cq 00:10:46.168 Waiting for AER completion... 
00:10:46.168 Failure: test_invalid_db_write_overflow_cq 00:10:46.168 00:10:46.168 11:14:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:46.168 11:14:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:46.427 [2024-11-15 11:14:23.599428] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:56.398 Executing: test_write_invalid_db 00:10:56.398 Waiting for AER completion... 00:10:56.398 Failure: test_write_invalid_db 00:10:56.398 00:10:56.398 Executing: test_invalid_db_write_overflow_sq 00:10:56.398 Waiting for AER completion... 00:10:56.398 Failure: test_invalid_db_write_overflow_sq 00:10:56.398 00:10:56.398 Executing: test_invalid_db_write_overflow_cq 00:10:56.398 Waiting for AER completion... 00:10:56.398 Failure: test_invalid_db_write_overflow_cq 00:10:56.398 00:10:56.398 ************************************ 00:10:56.398 END TEST nvme_doorbell_aers 00:10:56.398 ************************************ 00:10:56.398 00:10:56.398 real 0m40.340s 00:10:56.398 user 0m28.518s 00:10:56.398 sys 0m11.406s 00:10:56.398 11:14:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:56.398 11:14:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:56.398 11:14:33 nvme -- nvme/nvme.sh@97 -- # uname 00:10:56.398 11:14:33 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:56.398 11:14:33 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:56.398 11:14:33 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:10:56.398 11:14:33 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:56.398 11:14:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:56.398 ************************************ 00:10:56.398 START TEST nvme_multi_aen 00:10:56.398 ************************************ 00:10:56.398 11:14:33 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:56.398 [2024-11-15 11:14:33.621477] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:56.398 [2024-11-15 11:14:33.621573] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:56.398 [2024-11-15 11:14:33.621590] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:56.398 [2024-11-15 11:14:33.623281] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:56.398 [2024-11-15 11:14:33.623327] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:56.398 [2024-11-15 11:14:33.623343] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:56.398 [2024-11-15 11:14:33.625046] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. 
Dropping the request. 00:10:56.398 [2024-11-15 11:14:33.625084] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:56.398 [2024-11-15 11:14:33.625101] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:56.398 [2024-11-15 11:14:33.626527] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:56.398 [2024-11-15 11:14:33.626571] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:56.398 [2024-11-15 11:14:33.626586] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64632) is not found. Dropping the request. 00:10:56.398 Child process pid: 65142 00:10:56.657 [Child] Asynchronous Event Request test 00:10:56.657 [Child] Attached to 0000:00:10.0 00:10:56.657 [Child] Attached to 0000:00:11.0 00:10:56.657 [Child] Attached to 0000:00:13.0 00:10:56.657 [Child] Attached to 0000:00:12.0 00:10:56.657 [Child] Registering asynchronous event callbacks... 00:10:56.657 [Child] Getting orig temperature thresholds of all controllers 00:10:56.657 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.657 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.657 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.657 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.657 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:56.657 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.657 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.657 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.657 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.657 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.657 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.657 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.657 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.657 [Child] Cleaning up... 00:10:56.657 Asynchronous Event Request test 00:10:56.657 Attached to 0000:00:10.0 00:10:56.657 Attached to 0000:00:11.0 00:10:56.657 Attached to 0000:00:13.0 00:10:56.657 Attached to 0000:00:12.0 00:10:56.657 Reset controller to setup AER completions for this process 00:10:56.657 Registering asynchronous event callbacks... 
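Editor's note: both the child and the parent halves of this test sit in a loop driving the admin queue until each controller's AER arrives. A minimal sketch of that polling loop, assuming a flag flipped by the registered AER callback; the flag name is illustrative.

#include <stdbool.h>
#include "spdk/nvme.h"

static volatile bool g_aer_done;	/* set by the AER callback */

void
wait_for_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	while (!g_aer_done) {
		/* drives AERs and other admin completions; < 0 on failure */
		if (spdk_nvme_ctrlr_process_admin_completions(ctrlr) < 0) {
			break;
		}
	}
}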
00:10:56.657 Getting orig temperature thresholds of all controllers 00:10:56.657 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.657 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.657 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.657 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.657 Setting all controllers temperature threshold low to trigger AER 00:10:56.657 Waiting for all controllers temperature threshold to be set lower 00:10:56.657 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.657 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:56.657 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.657 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:56.657 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.657 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:56.657 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.657 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:56.657 Waiting for all controllers to trigger AER and reset threshold 00:10:56.657 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.657 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.657 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.657 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.657 Cleaning up... 00:10:56.657 ************************************ 00:10:56.657 END TEST nvme_multi_aen 00:10:56.657 ************************************ 00:10:56.657 00:10:56.657 real 0m0.595s 00:10:56.657 user 0m0.195s 00:10:56.657 sys 0m0.294s 00:10:56.657 11:14:33 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:56.657 11:14:33 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:56.657 11:14:34 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:56.657 11:14:34 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:56.657 11:14:34 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:56.657 11:14:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:56.917 ************************************ 00:10:56.917 START TEST nvme_startup 00:10:56.917 ************************************ 00:10:56.917 11:14:34 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:57.176 Initializing NVMe Controllers 00:10:57.176 Attached to 0000:00:10.0 00:10:57.176 Attached to 0000:00:11.0 00:10:57.176 Attached to 0000:00:13.0 00:10:57.176 Attached to 0000:00:12.0 00:10:57.176 Initialization complete. 00:10:57.176 Time used:180317.578 (us). 
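Editor's note: the "Time used" figure above is the wall time for bringing all four controllers up. Conceptually it is a tick delta around attach; a sketch follows, with spdk_nvme_connect standing in for whatever probe sequence the startup test actually uses (an assumption, the test source may differ).

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

void
timed_connect(const struct spdk_nvme_transport_id *trid)
{
	uint64_t start = spdk_get_ticks();
	struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(trid, NULL, 0);

	if (ctrlr != NULL) {
		double us = (spdk_get_ticks() - start) * 1e6 /
			    spdk_get_ticks_hz();
		printf("Time used:%.3f (us).\n", us);
	}
}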
00:10:57.176 ************************************ 00:10:57.176 END TEST nvme_startup 00:10:57.176 ************************************ 00:10:57.176 00:10:57.176 real 0m0.276s 00:10:57.176 user 0m0.097s 00:10:57.176 sys 0m0.136s 00:10:57.176 11:14:34 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:57.176 11:14:34 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:57.176 11:14:34 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:57.176 11:14:34 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:57.176 11:14:34 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:57.176 11:14:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:57.176 ************************************ 00:10:57.176 START TEST nvme_multi_secondary 00:10:57.176 ************************************ 00:10:57.176 11:14:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:10:57.176 11:14:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65198 00:10:57.176 11:14:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:57.176 11:14:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65199 00:10:57.176 11:14:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:57.176 11:14:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:00.462 Initializing NVMe Controllers 00:11:00.462 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:00.462 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:00.462 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:00.462 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:00.462 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:00.462 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:00.462 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:00.462 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:00.462 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:00.462 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:00.462 Initialization complete. Launching workers. 
00:11:00.462 ======================================================== 00:11:00.462 Latency(us) 00:11:00.462 Device Information : IOPS MiB/s Average min max 00:11:00.462 PCIE (0000:00:10.0) NSID 1 from core 1: 4485.33 17.52 3564.58 1809.49 9838.79 00:11:00.462 PCIE (0000:00:11.0) NSID 1 from core 1: 4485.33 17.52 3566.76 1648.49 10115.54 00:11:00.462 PCIE (0000:00:13.0) NSID 1 from core 1: 4485.33 17.52 3566.93 1557.81 11206.66 00:11:00.462 PCIE (0000:00:12.0) NSID 1 from core 1: 4485.33 17.52 3567.12 1744.78 11561.49 00:11:00.462 PCIE (0000:00:12.0) NSID 2 from core 1: 4485.33 17.52 3567.29 1720.58 9436.17 00:11:00.462 PCIE (0000:00:12.0) NSID 3 from core 1: 4485.33 17.52 3567.63 1705.09 9818.84 00:11:00.462 ======================================================== 00:11:00.462 Total : 26911.96 105.12 3566.72 1557.81 11561.49 00:11:00.462 00:11:00.720 Initializing NVMe Controllers 00:11:00.720 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:00.720 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:00.720 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:00.720 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:00.720 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:00.720 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:00.720 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:00.720 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:00.720 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:00.720 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:00.720 Initialization complete. Launching workers. 00:11:00.720 ======================================================== 00:11:00.720 Latency(us) 00:11:00.720 Device Information : IOPS MiB/s Average min max 00:11:00.720 PCIE (0000:00:10.0) NSID 1 from core 2: 3290.56 12.85 4860.73 1312.94 14532.50 00:11:00.720 PCIE (0000:00:11.0) NSID 1 from core 2: 3290.56 12.85 4862.02 1194.73 13619.20 00:11:00.720 PCIE (0000:00:13.0) NSID 1 from core 2: 3290.56 12.85 4861.68 1262.58 14548.00 00:11:00.720 PCIE (0000:00:12.0) NSID 1 from core 2: 3290.56 12.85 4861.56 1304.27 14381.36 00:11:00.720 PCIE (0000:00:12.0) NSID 2 from core 2: 3290.56 12.85 4861.90 1163.61 14579.90 00:11:00.720 PCIE (0000:00:12.0) NSID 3 from core 2: 3290.56 12.85 4861.86 1173.57 14418.76 00:11:00.720 ======================================================== 00:11:00.720 Total : 19743.37 77.12 4861.62 1163.61 14579.90 00:11:00.720 00:11:00.720 11:14:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65198 00:11:02.622 Initializing NVMe Controllers 00:11:02.622 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:02.622 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:02.622 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:02.622 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:02.622 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:02.622 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:02.622 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:02.622 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:02.622 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:02.622 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:02.622 Initialization complete. Launching workers. 
00:11:02.622 ======================================================== 00:11:02.622 Latency(us) 00:11:02.622 Device Information : IOPS MiB/s Average min max 00:11:02.622 PCIE (0000:00:10.0) NSID 1 from core 0: 7820.43 30.55 2044.31 922.63 8850.76 00:11:02.622 PCIE (0000:00:11.0) NSID 1 from core 0: 7820.43 30.55 2045.44 945.12 8580.09 00:11:02.622 PCIE (0000:00:13.0) NSID 1 from core 0: 7820.43 30.55 2045.41 854.52 8572.91 00:11:02.622 PCIE (0000:00:12.0) NSID 1 from core 0: 7820.43 30.55 2045.37 820.34 8359.53 00:11:02.622 PCIE (0000:00:12.0) NSID 2 from core 0: 7820.43 30.55 2045.34 775.12 8740.31 00:11:02.622 PCIE (0000:00:12.0) NSID 3 from core 0: 7820.43 30.55 2045.32 768.64 8388.96 00:11:02.622 ======================================================== 00:11:02.622 Total : 46922.58 183.29 2045.20 768.64 8850.76 00:11:02.622 00:11:02.622 11:14:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65199 00:11:02.622 11:14:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65268 00:11:02.622 11:14:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:02.622 11:14:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65269 00:11:02.622 11:14:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:02.622 11:14:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:05.973 Initializing NVMe Controllers 00:11:05.973 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:05.973 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:05.973 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:05.973 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:05.973 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:05.973 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:05.973 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:05.973 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:05.973 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:05.973 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:05.973 Initialization complete. Launching workers. 
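The nvme_multi_secondary runs traced above follow one pattern: a primary spdk_nvme_perf plus two secondary processes share the controllers through shared-memory group 0 (-i 0), each pinned to its own core mask, and the backgrounded processes are reaped with wait. A minimal sketch of that launch pattern, using the perf binary path from the log (the pid variable names are illustrative):

# Launch pattern from the nvme_multi_secondary trace: two processes are
# backgrounded, the third runs in the foreground, then the script waits.
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # primary, lcore 0
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # secondary, lcore 1
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # secondary, foreground

wait "$pid0"   # the 5 s primary outlives both 3 s secondaries
wait "$pid1"

The second run in the trace swaps the durations (a 3 s primary against a 5 s secondary on core 0x4) so teardown is exercised in the opposite order.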
00:11:05.973 ======================================================== 00:11:05.973 Latency(us) 00:11:05.973 Device Information : IOPS MiB/s Average min max 00:11:05.973 PCIE (0000:00:10.0) NSID 1 from core 1: 4580.95 17.89 3490.16 1032.54 10534.18 00:11:05.973 PCIE (0000:00:11.0) NSID 1 from core 1: 4580.95 17.89 3492.38 1051.69 10806.18 00:11:05.973 PCIE (0000:00:13.0) NSID 1 from core 1: 4580.95 17.89 3492.54 1051.15 9888.61 00:11:05.973 PCIE (0000:00:12.0) NSID 1 from core 1: 4580.95 17.89 3492.71 1049.19 9103.08 00:11:05.973 PCIE (0000:00:12.0) NSID 2 from core 1: 4580.95 17.89 3493.03 1054.28 8618.62 00:11:05.973 PCIE (0000:00:12.0) NSID 3 from core 1: 4580.95 17.89 3493.18 1058.77 10014.17 00:11:05.973 ======================================================== 00:11:05.973 Total : 27485.68 107.37 3492.33 1032.54 10806.18 00:11:05.973 00:11:05.973 Initializing NVMe Controllers 00:11:05.973 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:05.973 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:05.973 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:05.973 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:05.973 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:05.973 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:05.973 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:05.973 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:05.973 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:05.973 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:05.973 Initialization complete. Launching workers. 00:11:05.973 ======================================================== 00:11:05.973 Latency(us) 00:11:05.973 Device Information : IOPS MiB/s Average min max 00:11:05.973 PCIE (0000:00:10.0) NSID 1 from core 0: 4854.92 18.96 3293.05 1009.88 9924.31 00:11:05.973 PCIE (0000:00:11.0) NSID 1 from core 0: 4854.92 18.96 3294.55 1047.42 9458.53 00:11:05.973 PCIE (0000:00:13.0) NSID 1 from core 0: 4854.92 18.96 3294.32 1038.04 9818.65 00:11:05.973 PCIE (0000:00:12.0) NSID 1 from core 0: 4854.92 18.96 3294.09 1058.30 9942.50 00:11:05.973 PCIE (0000:00:12.0) NSID 2 from core 0: 4854.92 18.96 3293.92 1034.51 10492.97 00:11:05.973 PCIE (0000:00:12.0) NSID 3 from core 0: 4854.92 18.96 3293.83 1013.66 10947.64 00:11:05.973 ======================================================== 00:11:05.973 Total : 29129.52 113.79 3293.96 1009.88 10947.64 00:11:05.973 00:11:07.874 Initializing NVMe Controllers 00:11:07.874 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:07.874 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:07.874 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:07.874 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:07.874 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:07.874 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:07.874 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:07.874 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:07.874 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:07.874 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:07.874 Initialization complete. Launching workers. 
00:11:07.874 ======================================================== 00:11:07.874 Latency(us) 00:11:07.874 Device Information : IOPS MiB/s Average min max 00:11:07.874 PCIE (0000:00:10.0) NSID 1 from core 2: 3160.99 12.35 5060.26 1019.11 16128.19 00:11:07.874 PCIE (0000:00:11.0) NSID 1 from core 2: 3160.99 12.35 5060.84 1057.42 13071.10 00:11:07.874 PCIE (0000:00:13.0) NSID 1 from core 2: 3160.99 12.35 5056.93 1070.17 13282.21 00:11:07.874 PCIE (0000:00:12.0) NSID 1 from core 2: 3160.99 12.35 5057.35 1101.15 13468.74 00:11:07.874 PCIE (0000:00:12.0) NSID 2 from core 2: 3160.99 12.35 5057.30 1076.22 14143.73 00:11:07.874 PCIE (0000:00:12.0) NSID 3 from core 2: 3164.19 12.36 5051.62 1056.64 15031.85 00:11:07.874 ======================================================== 00:11:07.874 Total : 18969.15 74.10 5057.38 1019.11 16128.19 00:11:07.874 00:11:07.874 ************************************ 00:11:07.874 END TEST nvme_multi_secondary 00:11:07.874 ************************************ 00:11:07.874 11:14:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65268 00:11:07.874 11:14:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65269 00:11:07.874 00:11:07.874 real 0m10.728s 00:11:07.874 user 0m18.546s 00:11:07.874 sys 0m1.180s 00:11:07.874 11:14:45 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:07.874 11:14:45 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:07.874 11:14:45 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:07.874 11:14:45 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:07.874 11:14:45 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/64204 ]] 00:11:07.874 11:14:45 nvme -- common/autotest_common.sh@1092 -- # kill 64204 00:11:07.874 11:14:45 nvme -- common/autotest_common.sh@1093 -- # wait 64204 00:11:07.874 [2024-11-15 11:14:45.190124] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.190200] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.190238] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.190264] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.192464] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.192526] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.192547] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.192581] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.194860] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 
00:11:07.874 [2024-11-15 11:14:45.194919] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.194939] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.194964] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.197098] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.197162] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.197182] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:07.874 [2024-11-15 11:14:45.197205] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65141) is not found. Dropping the request. 00:11:08.134 11:14:45 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:11:08.134 11:14:45 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:11:08.134 11:14:45 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:08.134 11:14:45 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:08.134 11:14:45 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.134 11:14:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:08.134 ************************************ 00:11:08.134 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:08.134 ************************************ 00:11:08.134 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:08.394 * Looking for test storage... 
00:11:08.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:08.394 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.394 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.394 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.394 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.394 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.394 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.394 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.395 --rc genhtml_branch_coverage=1 00:11:08.395 --rc genhtml_function_coverage=1 00:11:08.395 --rc genhtml_legend=1 00:11:08.395 --rc geninfo_all_blocks=1 00:11:08.395 --rc geninfo_unexecuted_blocks=1 00:11:08.395 00:11:08.395 ' 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.395 --rc genhtml_branch_coverage=1 00:11:08.395 --rc genhtml_function_coverage=1 00:11:08.395 --rc genhtml_legend=1 00:11:08.395 --rc geninfo_all_blocks=1 00:11:08.395 --rc geninfo_unexecuted_blocks=1 00:11:08.395 00:11:08.395 ' 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.395 --rc genhtml_branch_coverage=1 00:11:08.395 --rc genhtml_function_coverage=1 00:11:08.395 --rc genhtml_legend=1 00:11:08.395 --rc geninfo_all_blocks=1 00:11:08.395 --rc geninfo_unexecuted_blocks=1 00:11:08.395 00:11:08.395 ' 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.395 --rc genhtml_branch_coverage=1 00:11:08.395 --rc genhtml_function_coverage=1 00:11:08.395 --rc genhtml_legend=1 00:11:08.395 --rc geninfo_all_blocks=1 00:11:08.395 --rc geninfo_unexecuted_blocks=1 00:11:08.395 00:11:08.395 ' 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:08.395 
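The xtrace above steps through the version guard in scripts/common.sh (lt 1.15 2 via cmp_versions): both version strings are split on '.'/'-' into arrays and compared numerically field by field, the first differing field deciding the result, which then selects the lcov --rc option spelling. A condensed sketch of that comparison, reconstructed from the trace rather than copied from common.sh (defaulting missing fields to 0 is an assumption of this sketch):

# Condensed cmp_versions: split versions on '.'/'-', compare numerically
# field by field; the first difference decides the requested relation.
cmp_versions() {
    local -a ver1 ver2
    local v
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == '>' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == *'='* ]]   # all fields equal: only <=, >=, == hold
}

cmp_versions 1.15 '<' 2 && echo 'lcov is pre-2.x: use the legacy --rc option spelling'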
11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65430 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:08.395 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65430 00:11:08.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.396 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 65430 ']' 00:11:08.396 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.396 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:08.396 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
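get_first_nvme_bdf in the trace above resolves 0000:00:10.0 by listing every controller's PCI address and echoing the first: gen_nvme.sh emits a bdev config JSON and jq pulls each traddr out of it. A sketch of the same enumeration under that assumption:

# Enumerate NVMe BDFs the way the trace does: gen_nvme.sh prints a JSON
# bdev config, jq extracts one traddr per controller.
rootdir=/home/vagrant/spdk_repo/spdk

get_first_nvme_bdf() {
    local -a bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    ((${#bdfs[@]} > 0)) || return 1   # no controllers found
    echo "${bdfs[0]}"
}

bdf=$(get_first_nvme_bdf)   # 0000:00:10.0 on this VM, which has 4 controllers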
00:11:08.396 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:08.396 11:14:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:08.655 [2024-11-15 11:14:45.880203] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:11:08.655 [2024-11-15 11:14:45.880547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65430 ] 00:11:08.914 [2024-11-15 11:14:46.069216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.914 [2024-11-15 11:14:46.220029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.914 [2024-11-15 11:14:46.220209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.914 [2024-11-15 11:14:46.220386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.914 [2024-11-15 11:14:46.220411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:09.851 nvme0n1 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_nqcQX.txt 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:09.851 true 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731669287 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65464 00:11:09.851 11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:09.851 
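The stuck-admin-command flow armed above, and continued in the trace that follows: attach nvme0, install a one-shot error injection that holds the next admin Get Features (opcode 0x0a) for up to 15 s before completing it with SCT 0 / SC 1, fire that command asynchronously over RPC, then reset the controller and verify the held command is completed manually with the injected status (the INVALID OPCODE (00/01) completion a few lines down). A sketch of the same RPC sequence with the flags copied from the trace; the base64 command payload is elided here as a placeholder:

# RPC sequence reconstructed from the nvme_reset_stuck_adm_cmd trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
tmp_file=$(mktemp /tmp/err_inj_XXXXX.txt)

$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

# Hold the next admin Get Features (opc 10) for 15 s, then complete it
# with SCT=0/SC=1; --do_not_submit keeps it from reaching the device.
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

# Submit the command that will get stuck; its completion lands in tmp_file.
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c '<base64 cmd>' > "$tmp_file" &
get_feat_pid=$!

sleep 2
$rpc bdev_nvme_reset_controller nvme0   # reset forces the held command to complete
wait "$get_feat_pid"                    # returns once it completes manually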
11:14:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:12.385 [2024-11-15 11:14:49.220133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:12.385 [2024-11-15 11:14:49.220463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:12.385 [2024-11-15 11:14:49.220495] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:12.385 [2024-11-15 11:14:49.220512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.385 [2024-11-15 11:14:49.222653] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65464 00:11:12.385 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65464 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65464 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_nqcQX.txt 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:12.385 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_nqcQX.txt 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65430 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 65430 ']' 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 65430 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65430 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:12.386 killing process with pid 65430 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65430' 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 65430 00:11:12.386 11:14:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 65430 00:11:14.918 11:14:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:14.918 11:14:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:14.918 00:11:14.918 real 0m6.400s 00:11:14.918 user 0m22.295s 00:11:14.918 sys 0m0.822s 00:11:14.919 11:14:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:11:14.919 11:14:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:14.919 ************************************ 00:11:14.919 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:14.919 ************************************ 00:11:14.919 11:14:51 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:14.919 11:14:51 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:14.919 11:14:51 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:14.919 11:14:51 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.919 11:14:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:14.919 ************************************ 00:11:14.919 START TEST nvme_fio 00:11:14.919 ************************************ 00:11:14.919 11:14:51 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:11:14.919 11:14:51 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:14.919 11:14:51 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:14.919 11:14:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:14.919 11:14:51 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:14.919 11:14:51 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:11:14.919 11:14:51 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:14.919 11:14:51 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:14.919 11:14:51 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:14.919 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:14.919 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:14.919 11:14:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:14.919 11:14:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:14.919 11:14:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:14.919 11:14:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:14.919 11:14:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:15.178 11:14:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:15.178 11:14:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:15.437 11:14:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:15.437 11:14:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:11:15.437 11:14:52 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:15.437 11:14:52 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:15.695 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:15.695 fio-3.35 00:11:15.695 Starting 1 thread 00:11:18.984 00:11:18.984 test: (groupid=0, jobs=1): err= 0: pid=65609: Fri Nov 15 11:14:56 2024 00:11:18.984 read: IOPS=21.8k, BW=85.1MiB/s (89.3MB/s)(170MiB/2001msec) 00:11:18.984 slat (nsec): min=4040, max=84016, avg=4765.03, stdev=1094.22 00:11:18.984 clat (usec): min=199, max=10507, avg=2929.98, stdev=252.28 00:11:18.984 lat (usec): min=204, max=10559, avg=2934.75, stdev=252.65 00:11:18.984 clat percentiles (usec): 00:11:18.984 | 1.00th=[ 2704], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 00:11:18.984 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:11:18.984 | 70.00th=[ 2966], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3097], 00:11:18.984 | 99.00th=[ 3621], 99.50th=[ 4424], 99.90th=[ 5800], 99.95th=[ 8029], 00:11:18.984 | 99.99th=[10159] 00:11:18.984 bw ( KiB/s): min=83632, max=88272, per=99.15%, avg=86440.00, stdev=2469.18, samples=3 00:11:18.984 iops : min=20908, max=22068, avg=21610.00, stdev=617.29, samples=3 00:11:18.984 write: IOPS=21.6k, BW=84.5MiB/s (88.7MB/s)(169MiB/2001msec); 0 zone resets 00:11:18.984 slat (nsec): min=4149, max=83233, avg=4941.39, stdev=1070.19 00:11:18.984 clat (usec): min=323, max=10359, avg=2935.54, stdev=257.43 00:11:18.984 lat (usec): min=329, max=10380, avg=2940.48, stdev=257.82 00:11:18.984 clat percentiles (usec): 00:11:18.984 | 1.00th=[ 2704], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 00:11:18.984 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:11:18.984 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3032], 95.00th=[ 3097], 00:11:18.984 | 99.00th=[ 3621], 99.50th=[ 4424], 99.90th=[ 6325], 99.95th=[ 8225], 00:11:18.984 | 99.99th=[ 9896] 00:11:18.984 bw ( KiB/s): min=83632, max=88280, per=100.00%, avg=86629.33, stdev=2600.21, samples=3 00:11:18.984 iops : min=20908, max=22070, avg=21657.33, stdev=650.05, samples=3 00:11:18.984 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:18.984 lat (msec) : 2=0.06%, 4=99.26%, 10=0.64%, 20=0.01% 00:11:18.984 cpu : usr=99.25%, sys=0.10%, 
ctx=16, majf=0, minf=607 00:11:18.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:18.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.984 issued rwts: total=43614,43311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.984 00:11:18.984 Run status group 0 (all jobs): 00:11:18.984 READ: bw=85.1MiB/s (89.3MB/s), 85.1MiB/s-85.1MiB/s (89.3MB/s-89.3MB/s), io=170MiB (179MB), run=2001-2001msec 00:11:18.984 WRITE: bw=84.5MiB/s (88.7MB/s), 84.5MiB/s-84.5MiB/s (88.7MB/s-88.7MB/s), io=169MiB (177MB), run=2001-2001msec 00:11:19.242 ----------------------------------------------------- 00:11:19.242 Suppressions used: 00:11:19.242 count bytes template 00:11:19.242 1 32 /usr/src/fio/parse.c 00:11:19.242 1 8 libtcmalloc_minimal.so 00:11:19.242 ----------------------------------------------------- 00:11:19.242 00:11:19.242 11:14:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:19.242 11:14:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:19.242 11:14:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:19.242 11:14:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:19.501 11:14:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:19.501 11:14:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:19.759 11:14:57 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:19.759 11:14:57 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:19.759 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:19.759 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:11:19.759 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:20.018 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:11:20.018 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:20.018 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:11:20.018 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:11:20.018 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:11:20.018 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:20.018 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:11:20.018 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:11:20.018 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:20.018 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:20.018 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:11:20.018 11:14:57 nvme.nvme_fio -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:20.018 11:14:57 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:20.018 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:20.018 fio-3.35 00:11:20.018 Starting 1 thread 00:11:24.203 00:11:24.203 test: (groupid=0, jobs=1): err= 0: pid=65675: Fri Nov 15 11:15:00 2024 00:11:24.203 read: IOPS=21.4k, BW=83.5MiB/s (87.6MB/s)(167MiB/2001msec) 00:11:24.203 slat (usec): min=3, max=535, avg= 4.93, stdev= 2.83 00:11:24.203 clat (usec): min=208, max=13191, avg=2986.83, stdev=324.44 00:11:24.203 lat (usec): min=212, max=13274, avg=2991.77, stdev=324.84 00:11:24.203 clat percentiles (usec): 00:11:24.203 | 1.00th=[ 2769], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2900], 00:11:24.203 | 30.00th=[ 2933], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:11:24.203 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3064], 95.00th=[ 3130], 00:11:24.203 | 99.00th=[ 3720], 99.50th=[ 4359], 99.90th=[ 7701], 99.95th=[10814], 00:11:24.203 | 99.99th=[12911] 00:11:24.203 bw ( KiB/s): min=83480, max=85792, per=99.25%, avg=84906.67, stdev=1247.44, samples=3 00:11:24.203 iops : min=20870, max=21450, avg=21227.33, stdev=312.57, samples=3 00:11:24.203 write: IOPS=21.2k, BW=82.9MiB/s (86.9MB/s)(166MiB/2001msec); 0 zone resets 00:11:24.203 slat (usec): min=3, max=579, avg= 5.10, stdev= 2.99 00:11:24.203 clat (usec): min=320, max=13089, avg=2991.66, stdev=334.59 00:11:24.203 lat (usec): min=325, max=13112, avg=2996.76, stdev=334.98 00:11:24.203 clat percentiles (usec): 00:11:24.203 | 1.00th=[ 2769], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2900], 00:11:24.203 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:11:24.203 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3064], 95.00th=[ 3130], 00:11:24.203 | 99.00th=[ 3720], 99.50th=[ 4424], 99.90th=[ 8848], 99.95th=[10945], 00:11:24.203 | 99.99th=[12649] 00:11:24.203 bw ( KiB/s): min=83440, max=85840, per=100.00%, avg=85016.00, stdev=1365.33, samples=3 00:11:24.203 iops : min=20860, max=21460, avg=21254.00, stdev=341.33, samples=3 00:11:24.203 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:24.203 lat (msec) : 2=0.13%, 4=99.04%, 10=0.72%, 20=0.07% 00:11:24.203 cpu : usr=99.35%, sys=0.00%, ctx=6, majf=0, minf=607 00:11:24.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:24.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.203 issued rwts: total=42797,42475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.203 00:11:24.203 Run status group 0 (all jobs): 00:11:24.203 READ: bw=83.5MiB/s (87.6MB/s), 83.5MiB/s-83.5MiB/s (87.6MB/s-87.6MB/s), io=167MiB (175MB), run=2001-2001msec 00:11:24.203 WRITE: bw=82.9MiB/s (86.9MB/s), 82.9MiB/s-82.9MiB/s (86.9MB/s-86.9MB/s), io=166MiB (174MB), run=2001-2001msec 00:11:24.203 ----------------------------------------------------- 00:11:24.203 Suppressions used: 00:11:24.203 count bytes template 00:11:24.203 1 32 /usr/src/fio/parse.c 00:11:24.203 1 8 libtcmalloc_minimal.so 00:11:24.203 ----------------------------------------------------- 00:11:24.203 00:11:24.203 
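Each nvme_fio run above goes through the fio_plugin wrapper visible in the trace: it ldd's the SPDK fio engine, greps out the sanitizer runtime the engine links against, and LD_PRELOADs that runtime ahead of the engine so the stock /usr/src/fio/fio binary can dlopen the ASan-instrumented plugin. A condensed sketch of that wrapper, reconstructed from the trace (the real autotest_common.sh also probes libclang_rt.asan, dropped here):

# Condensed fio_plugin: preload the ASan runtime the SPDK engine needs,
# then the engine itself, and hand all remaining arguments to fio.
fio_plugin() {
    local plugin=$1; shift
    local asan_lib
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Order matters: the sanitizer runtime must be first in LD_PRELOAD.
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$@"
}

fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096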
11:15:01 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:24.203 11:15:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:24.203 11:15:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:24.203 11:15:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:24.204 11:15:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:24.204 11:15:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:24.461 11:15:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:24.461 11:15:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:24.461 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:11:24.462 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:24.462 11:15:01 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:24.719 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:24.719 fio-3.35 00:11:24.719 Starting 1 thread 00:11:28.907 00:11:28.907 test: (groupid=0, jobs=1): err= 0: pid=65736: Fri Nov 15 11:15:05 2024 00:11:28.907 read: IOPS=21.4k, BW=83.7MiB/s (87.7MB/s)(167MiB/2001msec) 00:11:28.907 slat (nsec): min=3845, max=49267, avg=4869.65, stdev=1279.66 00:11:28.907 clat (usec): min=257, max=11382, avg=2984.16, stdev=519.16 00:11:28.907 lat (usec): min=262, max=11431, avg=2989.03, stdev=519.89 00:11:28.907 clat percentiles (usec): 00:11:28.907 | 1.00th=[ 2409], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 00:11:28.907 | 30.00th=[ 2868], 
40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:11:28.907 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3097], 95.00th=[ 3261], 00:11:28.907 | 99.00th=[ 5800], 99.50th=[ 7046], 99.90th=[ 8356], 99.95th=[ 8979], 00:11:28.907 | 99.99th=[11076] 00:11:28.907 bw ( KiB/s): min=84127, max=85016, per=98.71%, avg=84554.33, stdev=445.49, samples=3 00:11:28.907 iops : min=21031, max=21254, avg=21138.33, stdev=111.73, samples=3 00:11:28.907 write: IOPS=21.3k, BW=83.0MiB/s (87.0MB/s)(166MiB/2001msec); 0 zone resets 00:11:28.907 slat (usec): min=4, max=103, avg= 5.08, stdev= 1.40 00:11:28.907 clat (usec): min=208, max=11153, avg=2988.20, stdev=537.55 00:11:28.907 lat (usec): min=213, max=11174, avg=2993.28, stdev=538.28 00:11:28.907 clat percentiles (usec): 00:11:28.907 | 1.00th=[ 2343], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 00:11:28.907 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:11:28.907 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3097], 95.00th=[ 3261], 00:11:28.907 | 99.00th=[ 5932], 99.50th=[ 7308], 99.90th=[ 8356], 99.95th=[ 9241], 00:11:28.907 | 99.99th=[10814] 00:11:28.907 bw ( KiB/s): min=84023, max=85432, per=99.59%, avg=84661.00, stdev=713.85, samples=3 00:11:28.907 iops : min=21005, max=21358, avg=21165.00, stdev=178.80, samples=3 00:11:28.907 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:28.907 lat (msec) : 2=0.47%, 4=97.38%, 10=2.08%, 20=0.03% 00:11:28.907 cpu : usr=99.30%, sys=0.10%, ctx=3, majf=0, minf=607 00:11:28.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:28.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:28.907 issued rwts: total=42852,42525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:28.907 00:11:28.907 Run status group 0 (all jobs): 00:11:28.907 READ: bw=83.7MiB/s (87.7MB/s), 83.7MiB/s-83.7MiB/s (87.7MB/s-87.7MB/s), io=167MiB (176MB), run=2001-2001msec 00:11:28.907 WRITE: bw=83.0MiB/s (87.0MB/s), 83.0MiB/s-83.0MiB/s (87.0MB/s-87.0MB/s), io=166MiB (174MB), run=2001-2001msec 00:11:28.907 ----------------------------------------------------- 00:11:28.907 Suppressions used: 00:11:28.907 count bytes template 00:11:28.907 1 32 /usr/src/fio/parse.c 00:11:28.907 1 8 libtcmalloc_minimal.so 00:11:28.907 ----------------------------------------------------- 00:11:28.907 00:11:28.907 11:15:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:28.907 11:15:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:28.907 11:15:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:28.907 11:15:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:28.907 11:15:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:28.907 11:15:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:29.473 11:15:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:29.474 11:15:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:29.474 11:15:06 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:29.474 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:29.474 fio-3.35 00:11:29.474 Starting 1 thread 00:11:34.740 00:11:34.740 test: (groupid=0, jobs=1): err= 0: pid=65802: Fri Nov 15 11:15:11 2024 00:11:34.740 read: IOPS=21.0k, BW=82.1MiB/s (86.1MB/s)(164MiB/2001msec) 00:11:34.740 slat (nsec): min=4284, max=83371, avg=5217.04, stdev=1211.51 00:11:34.740 clat (usec): min=182, max=12368, avg=3029.66, stdev=456.88 00:11:34.740 lat (usec): min=187, max=12429, avg=3034.88, stdev=457.43 00:11:34.740 clat percentiles (usec): 00:11:34.740 | 1.00th=[ 2212], 5.00th=[ 2802], 10.00th=[ 2868], 20.00th=[ 2900], 00:11:34.740 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:11:34.740 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3130], 95.00th=[ 3195], 00:11:34.740 | 99.00th=[ 4948], 99.50th=[ 6325], 99.90th=[ 8848], 99.95th=[ 9503], 00:11:34.740 | 99.99th=[11994] 00:11:34.740 bw ( KiB/s): min=81952, max=85272, per=99.18%, avg=83398.00, stdev=1700.88, samples=3 00:11:34.740 iops : min=20488, max=21318, avg=20849.33, stdev=425.28, samples=3 00:11:34.740 write: IOPS=20.9k, BW=81.7MiB/s (85.6MB/s)(163MiB/2001msec); 0 zone resets 00:11:34.740 slat (nsec): min=4436, max=57640, avg=5675.44, stdev=1189.63 00:11:34.740 clat (usec): min=225, max=12138, avg=3038.41, stdev=466.66 00:11:34.740 lat (usec): min=231, max=12169, avg=3044.08, stdev=467.22 00:11:34.740 clat percentiles (usec): 00:11:34.740 | 1.00th=[ 2245], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2933], 00:11:34.740 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:11:34.740 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3130], 95.00th=[ 3195], 
00:11:34.740 | 99.00th=[ 5014], 99.50th=[ 6390], 99.90th=[ 8848], 99.95th=[ 9896], 00:11:34.740 | 99.99th=[11731] 00:11:34.740 bw ( KiB/s): min=82152, max=85232, per=99.79%, avg=83461.67, stdev=1590.84, samples=3 00:11:34.740 iops : min=20538, max=21308, avg=20865.33, stdev=397.75, samples=3 00:11:34.740 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:34.740 lat (msec) : 2=0.59%, 4=97.58%, 10=1.75%, 20=0.04% 00:11:34.740 cpu : usr=99.25%, sys=0.15%, ctx=4, majf=0, minf=605 00:11:34.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:34.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:34.740 issued rwts: total=42066,41839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.740 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:34.740 00:11:34.740 Run status group 0 (all jobs): 00:11:34.740 READ: bw=82.1MiB/s (86.1MB/s), 82.1MiB/s-82.1MiB/s (86.1MB/s-86.1MB/s), io=164MiB (172MB), run=2001-2001msec 00:11:34.740 WRITE: bw=81.7MiB/s (85.6MB/s), 81.7MiB/s-81.7MiB/s (85.6MB/s-85.6MB/s), io=163MiB (171MB), run=2001-2001msec 00:11:34.740 ----------------------------------------------------- 00:11:34.740 Suppressions used: 00:11:34.740 count bytes template 00:11:34.740 1 32 /usr/src/fio/parse.c 00:11:34.740 1 8 libtcmalloc_minimal.so 00:11:34.740 ----------------------------------------------------- 00:11:34.740 00:11:34.740 11:15:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:34.740 11:15:11 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:34.740 00:11:34.740 real 0m19.725s 00:11:34.740 user 0m14.868s 00:11:34.740 sys 0m5.318s 00:11:34.740 ************************************ 00:11:34.740 END TEST nvme_fio 00:11:34.740 ************************************ 00:11:34.740 11:15:11 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.740 11:15:11 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:34.740 ************************************ 00:11:34.740 END TEST nvme 00:11:34.740 ************************************ 00:11:34.740 00:11:34.740 real 1m35.109s 00:11:34.740 user 3m42.393s 00:11:34.740 sys 0m25.550s 00:11:34.740 11:15:11 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.740 11:15:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:34.740 11:15:11 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:11:34.740 11:15:11 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:34.740 11:15:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:34.740 11:15:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:34.740 11:15:11 -- common/autotest_common.sh@10 -- # set +x 00:11:34.740 ************************************ 00:11:34.740 START TEST nvme_scc 00:11:34.740 ************************************ 00:11:34.740 11:15:11 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:34.740 * Looking for test storage... 
00:11:34.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:34.740 11:15:11 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:34.740 11:15:11 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:34.740 11:15:11 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:34.740 11:15:11 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:34.740 11:15:11 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.740 11:15:11 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.740 11:15:11 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.740 11:15:11 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.740 11:15:11 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@345 -- # : 1 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@368 -- # return 0 00:11:34.741 11:15:11 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.741 11:15:11 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:34.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.741 --rc genhtml_branch_coverage=1 00:11:34.741 --rc genhtml_function_coverage=1 00:11:34.741 --rc genhtml_legend=1 00:11:34.741 --rc geninfo_all_blocks=1 00:11:34.741 --rc geninfo_unexecuted_blocks=1 00:11:34.741 00:11:34.741 ' 00:11:34.741 11:15:11 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:34.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.741 --rc genhtml_branch_coverage=1 00:11:34.741 --rc genhtml_function_coverage=1 00:11:34.741 --rc genhtml_legend=1 00:11:34.741 --rc geninfo_all_blocks=1 00:11:34.741 --rc geninfo_unexecuted_blocks=1 00:11:34.741 00:11:34.741 ' 00:11:34.741 11:15:11 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:11:34.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.741 --rc genhtml_branch_coverage=1 00:11:34.741 --rc genhtml_function_coverage=1 00:11:34.741 --rc genhtml_legend=1 00:11:34.741 --rc geninfo_all_blocks=1 00:11:34.741 --rc geninfo_unexecuted_blocks=1 00:11:34.741 00:11:34.741 ' 00:11:34.741 11:15:11 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:34.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.741 --rc genhtml_branch_coverage=1 00:11:34.741 --rc genhtml_function_coverage=1 00:11:34.741 --rc genhtml_legend=1 00:11:34.741 --rc geninfo_all_blocks=1 00:11:34.741 --rc geninfo_unexecuted_blocks=1 00:11:34.741 00:11:34.741 ' 00:11:34.741 11:15:11 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:34.741 11:15:11 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:34.741 11:15:11 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:34.741 11:15:11 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:34.741 11:15:11 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.741 11:15:11 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.741 11:15:11 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.741 11:15:11 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.741 11:15:11 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.741 11:15:11 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:34.741 11:15:11 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
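The lcov gate traced just above (lt 1.15 2, which dispatches to cmp_versions and decimal in scripts/common.sh) splits each version string on '.', '-' and ':' and compares the parts numerically, field by field, to decide which LCOV_OPTS to export. A condensed sketch of the same idea; version_lt is an illustrative name rather than the real helper, and it assumes purely numeric fields (the real code normalizes each part through decimal first):

# Field-by-field version comparison; succeeds when $1 sorts before $2.
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov predates 2.x"   # 1.15 < 2, as in the trace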
00:11:34.741 11:15:11 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:34.741 11:15:11 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:34.741 11:15:11 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:34.741 11:15:11 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:34.741 11:15:11 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:34.741 11:15:11 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:34.741 11:15:11 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:34.741 11:15:11 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:34.741 11:15:11 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:34.741 11:15:11 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.741 11:15:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:34.741 11:15:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:34.741 11:15:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:34.741 11:15:12 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:35.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:35.566 Waiting for block devices as requested 00:11:35.566 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.825 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.825 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.825 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:41.102 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:41.102 11:15:18 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:41.102 11:15:18 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:41.102 11:15:18 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:41.102 11:15:18 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:41.102 11:15:18 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
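Everything from here to the end of the scan is nvme/functions.sh populating one associative array per controller and namespace: scan_nvme_ctrls walks /sys/class/nvme/nvme*, checks the backing PCI address, and nvme_get runs nvme-cli's id-ctrl (or id-ns for namespaces), reading each "field : value" line with IFS=: into entries such as nvme0[vid]. A stripped-down sketch of that parse loop, using the same nvme-cli path as the trace and omitting the eval indirection:

# Mirror nvme_get: turn `nvme id-ctrl` output into nvme0[reg]=val pairs.
declare -A nvme0
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}    # field names are padded around the ':'
    [[ -n $reg && -n $val ]] || continue
    nvme0[$reg]=${val# }        # the real helper keeps the value's own padding
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "vid=${nvme0[vid]} sn=${nvme0[sn]} subnqn=${nvme0[subnqn]}"

The eval in the trace exists because the array name (nvme0, nvme0n1, nvme1, ...) is itself computed at runtime; the sketch hard-codes nvme0 for clarity.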
00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.102 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
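Several of the registers captured above are bitfields rather than plain counts: oacs=0x12a, for example, advertises the optional admin commands this QEMU controller supports. Decoding it with shell arithmetic, using bit positions as given in the NVMe base specification (worth re-checking against the spec revision you target):

# Decode OACS=0x12a bit by bit; all four tests below succeed for 0x12a
# (0x12a = bits 1, 3, 5 and 8 set).
oacs=0x12a
(( oacs & (1 << 1) )) && echo "bit 1: Format NVM"
(( oacs & (1 << 3) )) && echo "bit 3: Namespace Management"
(( oacs & (1 << 5) )) && echo "bit 5: Directives"
(( oacs & (1 << 8) )) && echo "bit 8: Doorbell Buffer Config"

The same style of check applies to lpa=0x7 (log page attributes) and frmw=0x3 (firmware slot properties) a few lines up.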
00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:41.103 11:15:18 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:41.103 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:41.104 11:15:18 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:41.104 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:41.105 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:41.106 11:15:18 
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:11:41.106 11:15:18 nvme_scc -- scripts/common.sh@18 -- # local i
00:11:41.106 11:15:18 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:11:41.106 11:15:18 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:41.106 11:15:18 nvme_scc -- scripts/common.sh@27 -- # return 0
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
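The pci_can_use check just above gates each discovered controller's BDF before the scan touches it; with both filter lists empty (the bare [[ =~ ]] and [[ -z '' ]] in the trace) every device passes. A rough sketch of such a gate, assuming SPDK's PCI_ALLOWED/PCI_BLOCKED environment-variable convention (names and details are assumptions, not the verbatim helper):

    # Accept a PCI BDF unless it is explicitly blocked, or an allow-list
    # exists and the BDF is not on it (illustrative sketch only).
    pci_can_use() {
        local i
        for i in $PCI_BLOCKED; do
            [[ $i == "$1" ]] && return 1   # explicitly blocked
        done
        [[ -z $PCI_ALLOWED ]] && return 0  # no allow-list: everything passes
        for i in $PCI_ALLOWED; do
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }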
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"'
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"'
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]]
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "'
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "'
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.106 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
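The ver value just captured packs the NVMe spec version as major<<16 | minor<<8 | tertiary, so 0x10400 identifies an NVMe 1.4.0 controller. A quick decode of the captured value:

    # Decode the captured ver field (0x10400 -> NVMe 1.4.0)
    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))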
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]]
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"'
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.107 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
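The wctemp/cctemp fields parsed above are reported in Kelvin per the Identify Controller data structure; the captured 343 and 373 correspond to roughly a 70 °C warning and 100 °C critical composite-temperature threshold:

    # Convert the captured thresholds from Kelvin to (approximate) Celsius
    for k in 343 373; do printf '%s K = %s C\n' "$k" "$((k - 273))"; done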
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
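The sqes=0x66 and cqes=0x44 values parsed above encode queue-entry sizes as powers of two: the low nibble is the required size, the high nibble the maximum, each as 2^n bytes, which here gives the standard 64-byte SQ and 16-byte CQ entries:

    # Decode the captured sqes/cqes nibbles (each nibble is log2 of the size)
    decode_es() { printf '%s: min %d B, max %d B\n' "$1" $((1 << ($2 & 0xf))) $((1 << (($2 >> 4) & 0xf))); }
    decode_es sqes 0x66   # min 64 B, max 64 B
    decode_es cqes 0x44   # min 16 B, max 16 B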
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"'
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.108 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
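The oncs=0x15d captured a little earlier is the Optional NVM Command Support bitmap, and bit 8 (0x100) advertises the Copy command, which is exactly what this nvme_scc (Simple Copy Command) test run is probing for:

    # Check the Copy bit in the captured ONCS value
    oncs=0x15d
    (( oncs & 0x100 )) && echo 'controller advertises Copy (simple copy)'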
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"'
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.109 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"'
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"'
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"'
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.110 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"'
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
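For the namespace being parsed here, nlbaf=7 is zero-based (eight LBA formats exist) and the low nibble of flbas selects the one in use, so flbas=0x7 points at lbaf7 in the format list that follows:

    # The low nibble of flbas indexes the active LBA format
    flbas=0x7
    echo "in-use LBA format: lbaf$((flbas & 0xf))"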
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"'
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"'
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.375 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "'
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]]
00:11:41.376 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "'
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]]
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "'
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]]
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "'
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]]
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"'
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:11:41.377 11:15:18 nvme_scc -- scripts/common.sh@18 -- # local i
00:11:41.377 11:15:18 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:11:41.377 11:15:18 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:41.377 11:15:18 nvme_scc -- scripts/common.sh@27 -- # return 0
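Putting the nvme1n1 fields just registered together: nsze/ncap/nuse count logical blocks, and the in-use lbaf7 above has lbads:12, i.e. 2^12 = 4096-byte blocks, so the namespace works out to roughly 6.3 GB:

    # nsze blocks x 4096-byte blocks (lbads:12) -> bytes
    nsze=0x17a17a
    echo "$((nsze)) blocks * 4096 B = $((nsze * 4096)) bytes"   # ~6.3 GB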
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
[per-field IFS=:/read/eval xtrace folded as above; resulting nvme2[...] values:]
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:11:41.377 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:11:41.378 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:11:41.379 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
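Every IFS=:/read/eval triplet folded above is one iteration of nvme_get over a "field : value" line of nvme-cli identify output. A minimal sketch of that parser as the trace exercises it (functions.sh@16-23; the exact whitespace trimming in the real helper may differ):

    # nvme_get as traced: spill `nvme id-ctrl`/`id-ns` output into a global
    # associative array named after the device node.
    nvme_get() {
        local ref=$1 reg val                        # @17: ref=nvme2, nvme2n1, ...
        shift                                       # @18: rest is the nvme-cli command line
        local -gA "$ref=()"                         # @20: declares e.g. global nvme2=()
        while IFS=: read -r reg val; do             # @21: split "vid : 0x1b36"-style lines
            [[ -n $val ]] || continue               # @22: skip lines without a value part
            eval "${ref}[${reg// /}]=\"${val# }\""  # @23: e.g. nvme2[vid]="0x1b36"
        done < <(/usr/local/src/nvme-cli/nvme "$@") # @16
    }

Afterwards the fields read back as plain lookups — in this run ${nvme2[mdts]} is 7 and ${nvme2[subnqn]} is nqn.2019-08.org.qemu:12342.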
[per-field xtrace folded as above; resulting nvme2n1[...] values:]
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:11:41.380 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:11:41.381 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
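The @53-@58 lines above are the inner namespace pass: the same parser reused per /dev/nvmeXnY, with the result indexed into the controller's _ns array. A sketch under the same caveats, reconstructed from the traced line numbers rather than copied from functions.sh:

    # Namespace loop as traced (functions.sh@53-58).
    local -n _ctrl_ns=${ctrl_dev}_ns              # @53: nameref onto e.g. nvme2_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do           # @54: /sys/class/nvme/nvme2/nvme2n1, n2, ...
        [[ -e $ns ]] || continue                  # @55
        ns_dev=${ns##*/}                          # @56: nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: fills nvme2n1[nsze], [flbas], [lbaf*], ...
        _ctrl_ns[${ns##*n}]=$ns_dev               # @58: keyed by namespace index
    done

The in-use LBA format falls out of those fields: flbas 0x4 selects lbaf4, and lbads:12 there means 2^12 = 4096-byte logical blocks, matching the "(in use)" marker in the trace.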
nvme/functions.sh@18 -- # shift 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:41.382 11:15:18 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:41.382 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
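[Editor's note] The lbaf0–lbaf7 entries that follow describe the namespace's supported LBA formats: ms is the metadata bytes per block, lbads the log2 of the data block size, and the low nibble of flbas (0x4 as logged for these namespaces) selects which format is active, marked "(in use)" in the output. A self-contained decode using the values as logged, with the field extraction purely illustrative:

    # pick apart the in-use LBA format (values as logged for nvme2n2 below)
    flbas=0x4
    fmt=$(( flbas & 0xf ))                        # low nibble indexes lbaf<N>
    lbaf='ms:0 lbads:12 rp:0 (in use)'            # the lbaf4 entry marked "(in use)"
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}
    echo "data block size: $(( 1 << lbads )) bytes"   # lbads:12 -> 4096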
00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:41.383 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 
11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 
11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:41.384 11:15:18 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.384 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:41.385 
11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:41.385 11:15:18 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:41.385 11:15:18 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:41.385 11:15:18 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:41.385 11:15:18 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:41.385 11:15:18 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.385 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
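[Editor's note] The sn, mn, and fr values just parsed identify the QEMU-emulated controller; Identify Controller returns these as fixed-width, space-padded ASCII, which is why the parsed values keep trailing blanks. If clean strings are needed, a pure-bash rtrim works (illustrative, not part of functions.sh):

    sn='12343 '                            # as parsed above, padding included
    sn=${sn%"${sn##*[![:space:]]}"}        # strip trailing whitespace
    printf '[%s]\n' "$sn"                  # -> [12343]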
00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
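[Editor's note] The ver field captured a few entries above (nvme3[ver]=0x10400) packs the NVMe spec version as major.minor.tertiary in bits 31:16, 15:8, and 7:0. Decoding the logged value directly:

    ver=0x10400                            # as recorded for nvme3 above
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    # -> NVMe 1.4.0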
00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.386 11:15:18 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 
11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
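[Editor's note] The sqes and cqes fields recorded just below encode queue entry sizes as two log2 nibbles: bits 3:0 are the required size, bits 7:4 the maximum. A quick decode of the logged values:

    sqes=0x66; cqes=0x44                   # as recorded for nvme3 below
    printf 'SQE: min %d / max %d bytes\n' $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) ))
    printf 'CQE: min %d / max %d bytes\n' $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))
    # -> 64/64 and 16/16, the standard fixed entry sizes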
00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.387 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
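[Editor's note] The oncs value logged above (0x15d) is the Optional NVM Command Support bitmask, and the ctrl_has_scc probes later in this log reduce to a single bit test on it, functions.sh@188:

    oncs=0x15d                             # as recorded for every controller here
    if (( oncs & (1 << 8) )); then         # bit 8 = Copy command, i.e. SCC support
        echo "controller supports Simple Copy (SCC)"
    fi

With 0x15d the test passes for all four controllers, which is why each loop iteration below echoes its controller name.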
00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:41.388 11:15:18 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:41.388 11:15:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:41.388 11:15:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:41.389 
11:15:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:11:41.389 11:15:18 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:11:41.647 11:15:18 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:11:41.647 11:15:18 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:11:41.647 11:15:18 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:41.647 11:15:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:41.647 11:15:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:41.647 11:15:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:41.648 11:15:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:41.648 11:15:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:41.648 11:15:18 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:41.648 11:15:18 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:41.648 11:15:18 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:11:41.648 11:15:18 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:11:41.648 11:15:18 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:11:41.648 11:15:18 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:11:41.648 11:15:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:41.648 11:15:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:41.648 11:15:18 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:42.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:43.151 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:43.151 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:43.151 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:43.151 0000:00:13.0 (1b36 
0010): nvme -> uio_pci_generic 00:11:43.151 11:15:20 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:43.151 11:15:20 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:43.151 11:15:20 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:43.151 11:15:20 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:43.151 ************************************ 00:11:43.151 START TEST nvme_simple_copy 00:11:43.151 ************************************ 00:11:43.151 11:15:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:43.409 Initializing NVMe Controllers 00:11:43.409 Attaching to 0000:00:10.0 00:11:43.409 Controller supports SCC. Attached to 0000:00:10.0 00:11:43.409 Namespace ID: 1 size: 6GB 00:11:43.409 Initialization complete. 00:11:43.409 00:11:43.409 Controller QEMU NVMe Ctrl (12340 ) 00:11:43.409 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:43.409 Namespace Block Size:4096 00:11:43.409 Writing LBAs 0 to 63 with Random Data 00:11:43.409 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:43.409 LBAs matching Written Data: 64 00:11:43.409 00:11:43.409 real 0m0.319s 00:11:43.409 user 0m0.111s 00:11:43.409 sys 0m0.106s 00:11:43.409 11:15:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:43.409 ************************************ 00:11:43.409 END TEST nvme_simple_copy 00:11:43.409 ************************************ 00:11:43.409 11:15:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:43.668 ************************************ 00:11:43.668 END TEST nvme_scc 00:11:43.668 ************************************ 00:11:43.668 00:11:43.668 real 0m9.100s 00:11:43.668 user 0m1.528s 00:11:43.668 sys 0m2.554s 00:11:43.668 11:15:20 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:43.668 11:15:20 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:43.668 11:15:20 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:43.668 11:15:20 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:11:43.668 11:15:20 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:11:43.668 11:15:20 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:11:43.668 11:15:20 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:43.668 11:15:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:43.668 11:15:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:43.668 11:15:20 -- common/autotest_common.sh@10 -- # set +x 00:11:43.668 ************************************ 00:11:43.668 START TEST nvme_fdp 00:11:43.668 ************************************ 00:11:43.668 11:15:20 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh 00:11:43.668 * Looking for test storage... 
00:11:43.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:43.668 11:15:21 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:43.668 11:15:21 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:11:43.668 11:15:21 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:43.926 11:15:21 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:43.926 11:15:21 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.926 11:15:21 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.926 11:15:21 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.926 11:15:21 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.926 11:15:21 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.926 11:15:21 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.926 11:15:21 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.926 11:15:21 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:11:43.927 11:15:21 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.927 11:15:21 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:43.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.927 --rc genhtml_branch_coverage=1 00:11:43.927 --rc genhtml_function_coverage=1 00:11:43.927 --rc genhtml_legend=1 00:11:43.927 --rc geninfo_all_blocks=1 00:11:43.927 --rc geninfo_unexecuted_blocks=1 00:11:43.927 00:11:43.927 ' 00:11:43.927 11:15:21 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:43.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.927 --rc genhtml_branch_coverage=1 00:11:43.927 --rc genhtml_function_coverage=1 00:11:43.927 --rc genhtml_legend=1 00:11:43.927 --rc geninfo_all_blocks=1 00:11:43.927 --rc geninfo_unexecuted_blocks=1 00:11:43.927 00:11:43.927 ' 00:11:43.927 11:15:21 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:11:43.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.927 --rc genhtml_branch_coverage=1 00:11:43.927 --rc genhtml_function_coverage=1 00:11:43.927 --rc genhtml_legend=1 00:11:43.927 --rc geninfo_all_blocks=1 00:11:43.927 --rc geninfo_unexecuted_blocks=1 00:11:43.927 00:11:43.927 ' 00:11:43.927 11:15:21 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:43.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.927 --rc genhtml_branch_coverage=1 00:11:43.927 --rc genhtml_function_coverage=1 00:11:43.927 --rc genhtml_legend=1 00:11:43.927 --rc geninfo_all_blocks=1 00:11:43.927 --rc geninfo_unexecuted_blocks=1 00:11:43.927 00:11:43.927 ' 00:11:43.927 11:15:21 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:43.927 11:15:21 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:43.927 11:15:21 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:43.927 11:15:21 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:43.927 11:15:21 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.927 11:15:21 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.927 11:15:21 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.927 11:15:21 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.927 11:15:21 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.927 11:15:21 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:43.927 11:15:21 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
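The lt 1.15 2 trace earlier in this block is scripts/common.sh deciding whether the installed lcov predates version 2, which gates the extra --rc coverage options exported above. The comparison splits both version strings on dots, dashes and colons and compares field by field; a compressed sketch of the same idea (hypothetical: unlike the real cmp_versions it only handles '<' and assumes purely numeric fields):

  # Field-wise "less than" on dotted version strings, as in the trace above.
  version_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller field
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger field
      done
      return 1    # all fields equal: not "less than"
  }

  version_lt 1.15 2 && echo "lcov < 2: add the branch/function coverage rc options"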
00:11:43.927 11:15:21 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:43.927 11:15:21 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:43.927 11:15:21 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:43.927 11:15:21 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:43.927 11:15:21 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:43.927 11:15:21 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:43.927 11:15:21 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:43.927 11:15:21 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:43.927 11:15:21 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:43.927 11:15:21 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:43.927 11:15:21 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:44.495 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:44.753 Waiting for block devices as requested 00:11:44.753 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:45.012 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:45.012 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:45.272 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:50.636 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:50.636 11:15:27 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:50.636 11:15:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:50.636 11:15:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:50.636 11:15:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:50.636 11:15:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
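Here nvme_get begins a fresh capture for the controller behind 0000:00:11.0: functions.sh@16 runs /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 and the IFS=:/read loop turns every "name : value" line into an entry of a bash associative array, which is what produces the test/eval pairs that follow. A simplified, self-contained sketch of that parse (hypothetical: it assumes nvme-cli on PATH and a fixed array name, whereas functions.sh needs eval because the target array name nvme0, nvme1, ... is computed at run time):

  # Parse `nvme id-ctrl` "name : value" lines into an associative array.
  declare -A ctrl

  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}                # field names carry only padding
      val=${val#"${val%%[![:space:]]*}"}      # trim leading whitespace, keep the rest
      [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)

  printf 'vid=%s mdts=%s oncs=%s\n' "${ctrl[vid]}" "${ctrl[mdts]}" "${ctrl[oncs]}"

Note that with IFS=: the final read variable keeps any later colons, which is why multi-colon values like the ps0 power-state string survive intact in the trace.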
00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:50.636 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:50.636 11:15:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:50.637 11:15:27 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
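The controller just reported oacs=0x12a, and each optional admin capability is a single bit of that word. A quick decode under my reading of the NVMe base specification's OACS bit assignments; the bit names below are an assumption to verify against the spec, only the bit arithmetic mirrors the trace:

  # Decode OACS=0x12a (binary 1 0010 1010): bits 1, 3, 5 and 8 are set.
  oacs=0x12a
  (( oacs & 1 << 1 )) && echo "Format NVM supported"            # bit names assumed,
  (( oacs & 1 << 3 )) && echo "Namespace Management supported"  # per NVMe base spec
  (( oacs & 1 << 5 )) && echo "Directives supported"
  (( oacs & 1 << 8 )) && echo "Doorbell Buffer Config supported"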
00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:50.637 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:50.638 11:15:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:50.638 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:50.639 11:15:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:50.639 11:15:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:50.639 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:50.640 
11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:50.640 11:15:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.640 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:50.641 11:15:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.641 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:50.642 11:15:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:50.642 11:15:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:50.642 11:15:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:50.642 11:15:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # 
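The long run of entries above is the nvme_get helper walking `nvme id-ctrl` / `nvme id-ns` output one "reg : val" pair at a time and eval-ing each pair into a global associative array (nvme0, nvme0n1, ...). For reference, a minimal sketch of that loop, reconstructed only from the functions.sh@16-23 tags visible in this trace; the whitespace trimming and the exact invocation details are assumptions, and the upstream helper may differ:

    # Sketch of nvme_get as traced at functions.sh@16-23: parse the
    # "key : value" lines of nvme-cli identify output into a global
    # associative array whose name is passed as $1.
    nvme_get() {
        local ref=$1 reg val          # @17
        shift                         # @18: remaining args = nvme-cli args
        local -gA "$ref=()"           # @20: e.g. declare -gA nvme0=()
        while IFS=: read -r reg val; do            # @21
            [[ -n $val ]] || continue              # @22: skip non-field lines
            reg=${reg//[[:space:]]/}               # assumed: strip key padding
            eval "${ref}[\$reg]=\${val# }"         # @23: e.g. nvme0[vid]=0x1b36
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16
    }

With that in place, a call such as `nvme_get nvme0 id-ctrl /dev/nvme0` leaves every identify field queryable as `${nvme0[vid]}`, `${nvme0[mdts]}`, and so on, which is the form the rest of functions.sh reads back.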
IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.642 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 
11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.643 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 
11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:50.644 11:15:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.644 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.644 11:15:27 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:50.645 11:15:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.645 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:50.646 11:15:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:50.647 11:15:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:50.647 11:15:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:50.647 11:15:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:50.647 11:15:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:50.647 
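
For anyone skimming this wall of trace: the repeated eval lines are nvme/functions.sh (the nvme_get helper at lines 17-23 of that script) caching every "field : value" pair printed by nvme-cli into a global bash associative array per device (nvme1, nvme1n1, nvme2, ...). A minimal sketch of that pattern, under a hypothetical helper name (the real nvme_get drives the same loop for both id-ctrl and id-ns output):

    # Cache `nvme id-ctrl` output into a global associative array named $1.
    # parse_nvme_id is a made-up name for illustration; nvme_get in
    # functions.sh does the equivalent.
    parse_nvme_id() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                   # same declaration the trace shows
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}          # "sn       " -> "sn"
            [[ -n $reg && -n $val ]] || continue   # skip lines with no value
            val=${val# }                      # drop the one space after ':'
            eval "${ref}[$reg]=\$val"         # e.g. nvme2[vid]=0x1b36
        done < <(nvme id-ctrl "$dev")
    }

    # usage: parse_nvme_id nvme2 /dev/nvme2; echo "${nvme2[sn]}"

Splitting on the first ':' only is what keeps compound values like the power-state line ("mp:25.00W operational enlat:16 ...") intact in val, exactly as the cached nvme2[ps0] entry below shows.
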
11:15:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:50.647 11:15:27 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.647 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.648 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:50.649 11:15:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.649 11:15:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.649 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:50.650 11:15:27 nvme_fdp -- 
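
The raw id-ctrl values cached above are mostly log2-encoded, so the trace reads more easily once decoded. A small sketch using the nvme2 values just captured (sqes=0x66, cqes=0x44, mdts=7); the 4 KiB page size is an assumption, since CAP.MPSMIN is not part of this trace:

    sqes=0x66 cqes=0x44 mdts=7 mps_bytes=4096         # mps_bytes assumed: CAP.MPSMIN=0
    echo "SQE size: $(( 1 << (sqes & 0xf) )) bytes"   # low nibble = required size, 2^6 = 64
    echo "CQE size: $(( 1 << (cqes & 0xf) )) bytes"   # 2^4 = 16
    echo "max transfer: $(( (1 << mdts) * mps_bytes / 1024 )) KiB"   # 2^7 pages = 512 KiB
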
nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.650 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.651 11:15:27 nvme_fdp -- 
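
Likewise for the namespace data just cached: the "(in use)" marker lands on lbaf4 for nvme2n1 because flbas=0x4, whose low nibble selects the active LBA format, and lbads in each lbaf entry is log2 of the logical block size. A one-liner decode, using the values from the trace:

    flbas=0x4 lbads=12
    echo "active format: lbaf$(( flbas & 0xf ))"    # lbaf4
    echo "block size: $(( 1 << lbads )) bytes"      # 4096; lbads:9 would mean 512
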
00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:11:50.651 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:11:50.652 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
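The xtrace above is nvme/functions.sh's nvme_get helper populating a global associative array (here nvme2n2) from `nvme id-ns` output, one `field : value` line at a time. A minimal bash sketch of that parsing idiom, simplified from the trace (the whitespace trimming shown is an assumption; the trace only shows the resulting assignments):

    # Turn "field : value" lines from nvme-cli into a global associative array.
    # Sketch of the pattern visible in the trace, not the verbatim helper.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip header/blank lines with no value
            reg=${reg//[[:space:]]/}         # assumed: trim padding from the field name
            val=${val# }                     # assumed: drop the space after the colon
            eval "${ref}[$reg]=\"\$val\""    # e.g. nvme2n2[nsze]=0x100000
        done < <("$@")
    }

    nvme_get nvme2n2 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2

Note that only the first colon splits the line; `read -r reg val` leaves the remainder intact, which is why values like 'ms:0 lbads:9 rp:0 ' keep their inner colons.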
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:11:50.653 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:11:50.915 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:11:50.915 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:11:50.915 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:11:50.915 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:11:50.915 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:11:50.915 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:11:50.915 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:11:50.915 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:11:50.915 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:11:50.915 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:11:50.915 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:11:50.915 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:11:50.915 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:11:50.916 11:15:27 nvme_fdp -- scripts/common.sh@18 -- # local i
00:11:50.916 11:15:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:11:50.916 11:15:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:50.916 11:15:27 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:11:50.916 11:15:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 '
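At functions.sh@58-63 the per-namespace arrays are tied back to their controller: _ctrl_ns has collected nvme2n1 through nvme2n3, and the controller-level maps record the namespace-map name and PCI address for nvme2 before the loop advances to nvme3. A sketch of that bookkeeping, where the sysfs BDF lookup is an assumption (the trace only shows the resulting pci= value), as is the pci_can_use gate drawing on allow/block lists in scripts/common.sh:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed source of the BDF
        pci_can_use "$pci" || continue       # skip controllers filtered out by PCI lists
        ctrl_dev=${ctrl##*/}                 # e.g. nvme2
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns    # name of the namespace map for this controller
        bdfs["$ctrl_dev"]=$pci               # e.g. 0000:00:12.0
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done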
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:11:50.916 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
00:11:50.917 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
00:11:50.918 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0
-- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:50.919 11:15:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
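The trace running through here is functions.sh caching every field of nvme id-ctrl output into a per-controller associative array (the nvme3[...]=... evals above), then probing the cached CTRATT value for FDP support. As an editorial aside, a minimal standalone sketch of that pattern, assuming nvme-cli is installed; the device path is illustrative:

declare -A nvme3
while IFS=: read -r reg val; do
  [[ -n $val ]] || continue           # skip banner and blank lines
  reg=${reg//[[:space:]]/}            # e.g. sqes, cqes, ctratt, subnqn
  nvme3[$reg]=${val# }                # same shape as the evals in the trace
done < <(nvme id-ctrl /dev/nvme3)

# CTRATT bit 19 advertises Flexible Data Placement: 0x88010 has it set,
# plain 0x8000 does not, which is why only nvme3 passes ctrl_has_fdp below.
(( nvme3[ctratt] & 1 << 19 )) && echo "nvme3 supports FDP"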
00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:50.919 11:15:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:11:50.920 11:15:28 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:11:50.920 11:15:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:50.920 11:15:28 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:50.920 11:15:28 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:51.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:52.423 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:52.423 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:52.423 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:52.423 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:52.423 11:15:29 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:52.423 11:15:29 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:52.423 11:15:29 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:52.423 11:15:29 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:52.423 ************************************ 00:11:52.423 START TEST nvme_flexible_data_placement 00:11:52.423 ************************************ 00:11:52.423 11:15:29 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:52.682 Initializing NVMe Controllers 00:11:52.682 Attaching to 0000:00:13.0 00:11:52.682 Controller supports FDP Attached to 0000:00:13.0 00:11:52.682 Namespace ID: 1 Endurance Group ID: 1 00:11:52.682 Initialization complete. 00:11:52.682 00:11:52.682 ================================== 00:11:52.682 == FDP tests for Namespace: #01 == 00:11:52.682 ================================== 00:11:52.682 00:11:52.682 Get Feature: FDP: 00:11:52.682 ================= 00:11:52.682 Enabled: Yes 00:11:52.682 FDP configuration Index: 0 00:11:52.682 00:11:52.682 FDP configurations log page 00:11:52.682 =========================== 00:11:52.682 Number of FDP configurations: 1 00:11:52.682 Version: 0 00:11:52.682 Size: 112 00:11:52.682 FDP Configuration Descriptor: 0 00:11:52.682 Descriptor Size: 96 00:11:52.682 Reclaim Group Identifier format: 2 00:11:52.682 FDP Volatile Write Cache: Not Present 00:11:52.682 FDP Configuration: Valid 00:11:52.682 Vendor Specific Size: 0 00:11:52.682 Number of Reclaim Groups: 2 00:11:52.682 Number of Reclaim Unit Handles: 8 00:11:52.682 Max Placement Identifiers: 128 00:11:52.682 Number of Namespaces Supported: 256 00:11:52.682 Reclaim Unit Nominal Size: 6000000 bytes 00:11:52.682 Estimated Reclaim Unit Time Limit: Not Reported 00:11:52.682 RUH Desc #000: RUH Type: Initially Isolated 00:11:52.682 RUH Desc #001: RUH Type: Initially Isolated 00:11:52.682 RUH Desc #002: RUH Type: Initially Isolated 00:11:52.682 RUH Desc #003: RUH Type: Initially Isolated 00:11:52.683 RUH Desc #004: RUH Type: Initially Isolated 00:11:52.683 RUH Desc #005: RUH Type: Initially Isolated 00:11:52.683 RUH Desc #006: RUH Type: Initially Isolated 00:11:52.683 RUH Desc #007: RUH Type: Initially Isolated 00:11:52.683 00:11:52.683 FDP reclaim unit handle usage log page 00:11:52.683 ====================================== 00:11:52.683 Number of Reclaim Unit Handles: 8 00:11:52.683 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:52.683 RUH Usage Desc #001: RUH Attributes: Unused 00:11:52.683 RUH Usage Desc #002: RUH Attributes: Unused 00:11:52.683 RUH Usage Desc #003: RUH Attributes: Unused 00:11:52.683 RUH Usage Desc #004: RUH Attributes: Unused 00:11:52.683 RUH Usage Desc #005: RUH Attributes: Unused 00:11:52.683 RUH Usage Desc #006: RUH Attributes: Unused 00:11:52.683 RUH Usage Desc #007: RUH Attributes: Unused 00:11:52.683 00:11:52.683 FDP statistics log page 00:11:52.683 ======================= 00:11:52.683 Host bytes with metadata written: 1001299968 00:11:52.683 Media bytes with metadata written: 1001414656 00:11:52.683 Media bytes erased: 0 00:11:52.683 00:11:52.683 FDP Reclaim unit handle status 00:11:52.683 ============================== 00:11:52.683 Number of RUHS descriptors: 2 00:11:52.683 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000516 00:11:52.683 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:52.683 00:11:52.683 FDP write on placement id: 0 success 00:11:52.683 00:11:52.683
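The configuration, usage, statistics, and reclaim unit handle status sections above are straight dumps of the controller's FDP log pages. As a hedged aside: recent nvme-cli builds expose the same pages through an fdp plugin, roughly as below (the plugin and its exact flags depend on the nvme-cli version, so treat this as an assumption rather than a verified recipe):

nvme fdp configs /dev/nvme3      # FDP configurations log page
nvme fdp stats /dev/nvme3        # host/media bytes with metadata written
nvme fdp status /dev/nvme3 -n 1  # RUHS descriptors: PID, RUHID, ERUT, RUAMW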
Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:52.683 00:11:52.683 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:52.683 00:11:52.683 Get Feature: FDP Events for Placement handle: #0 00:11:52.683 ======================== 00:11:52.683 Number of FDP Events: 6 00:11:52.683 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:52.683 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:52.683 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:52.683 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:52.683 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:52.683 FDP Event: #5 Type: Implicitly Modified RUH Enabled: No 00:11:52.683 00:11:52.683 FDP events log page 00:11:52.683 =================== 00:11:52.683 Number of FDP events: 1 00:11:52.683 FDP Event #0: 00:11:52.683 Event Type: RU Not Written to Capacity 00:11:52.683 Placement Identifier: Valid 00:11:52.683 NSID: Valid 00:11:52.683 Location: Valid 00:11:52.683 Placement Identifier: 0 00:11:52.683 Event Timestamp: 7 00:11:52.683 Namespace Identifier: 1 00:11:52.683 Reclaim Group Identifier: 0 00:11:52.683 Reclaim Unit Handle Identifier: 0 00:11:52.683 00:11:52.683 FDP test passed 00:11:52.683 00:11:52.683 real 0m0.283s 00:11:52.683 user 0m0.093s 00:11:52.683 sys 0m0.089s 00:11:52.683 11:15:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:52.683 ************************************ 00:11:52.683 END TEST nvme_flexible_data_placement 00:11:52.683 ************************************ 00:11:52.683 11:15:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:52.942 ************************************ 00:11:52.942 END TEST nvme_fdp 00:11:52.942 ************************************ 00:11:52.942 00:11:52.942 real 0m9.189s 00:11:52.942 user 0m1.633s 00:11:52.942 sys 0m2.557s 00:11:52.942 11:15:30 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:52.942 11:15:30 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:52.942 11:15:30 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:52.942 11:15:30 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:52.942 11:15:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:52.942 11:15:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:52.942 11:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:52.942 ************************************ 00:11:52.942 START TEST nvme_rpc 00:11:52.942 ************************************ 00:11:52.942 11:15:30 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:52.942 * Looking for test storage...
00:11:52.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:52.942 11:15:30 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:52.942 11:15:30 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:52.942 11:15:30 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.201 11:15:30 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:53.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.201 --rc genhtml_branch_coverage=1 00:11:53.201 --rc genhtml_function_coverage=1 00:11:53.201 --rc genhtml_legend=1 00:11:53.201 --rc geninfo_all_blocks=1 00:11:53.201 --rc geninfo_unexecuted_blocks=1 00:11:53.201 00:11:53.201 ' 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:53.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.201 --rc genhtml_branch_coverage=1 00:11:53.201 --rc genhtml_function_coverage=1 00:11:53.201 --rc genhtml_legend=1 00:11:53.201 --rc geninfo_all_blocks=1 00:11:53.201 --rc geninfo_unexecuted_blocks=1 00:11:53.201 00:11:53.201 ' 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:11:53.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.201 --rc genhtml_branch_coverage=1 00:11:53.201 --rc genhtml_function_coverage=1 00:11:53.201 --rc genhtml_legend=1 00:11:53.201 --rc geninfo_all_blocks=1 00:11:53.201 --rc geninfo_unexecuted_blocks=1 00:11:53.201 00:11:53.201 ' 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:53.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.201 --rc genhtml_branch_coverage=1 00:11:53.201 --rc genhtml_function_coverage=1 00:11:53.201 --rc genhtml_legend=1 00:11:53.201 --rc geninfo_all_blocks=1 00:11:53.201 --rc geninfo_unexecuted_blocks=1 00:11:53.201 00:11:53.201 ' 00:11:53.201 11:15:30 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.201 11:15:30 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:11:53.201 11:15:30 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:53.201 11:15:30 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67190 00:11:53.201 11:15:30 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:53.201 11:15:30 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:53.201 11:15:30 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67190 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 67190 ']' 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:53.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:53.201 11:15:30 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.460 [2024-11-15 11:15:30.650960] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
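A quick recap of the get_first_nvme_bdf step traced above, condensed from the xtrace with paths exactly as in this run: gen_nvme.sh emits an SPDK JSON config for every attached controller, jq pulls out each PCI address, and the test drives the first one.

bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
bdf=${bdfs[0]}    # first of the four controllers: 0000:00:10.0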
00:11:53.460 [2024-11-15 11:15:30.651085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67190 ] 00:11:53.460 [2024-11-15 11:15:30.830275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:53.719 [2024-11-15 11:15:30.948972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.719 [2024-11-15 11:15:30.949008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.655 11:15:31 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:54.655 11:15:31 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:54.655 11:15:31 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:54.914 Nvme0n1 00:11:54.914 11:15:32 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:54.914 11:15:32 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:55.173 request: 00:11:55.173 { 00:11:55.173 "bdev_name": "Nvme0n1", 00:11:55.173 "filename": "non_existing_file", 00:11:55.173 "method": "bdev_nvme_apply_firmware", 00:11:55.173 "req_id": 1 00:11:55.173 } 00:11:55.173 Got JSON-RPC error response 00:11:55.173 response: 00:11:55.173 { 00:11:55.173 "code": -32603, 00:11:55.173 "message": "open file failed." 00:11:55.173 } 00:11:55.173 11:15:32 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:55.173 11:15:32 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:55.173 11:15:32 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:55.173 11:15:32 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:55.173 11:15:32 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67190 00:11:55.173 11:15:32 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 67190 ']' 00:11:55.173 11:15:32 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 67190 00:11:55.173 11:15:32 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:11:55.173 11:15:32 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:55.173 11:15:32 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67190 00:11:55.432 11:15:32 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:55.432 killing process with pid 67190 00:11:55.432 11:15:32 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:55.432 11:15:32 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67190' 00:11:55.432 11:15:32 nvme_rpc -- common/autotest_common.sh@971 -- # kill 67190 00:11:55.432 11:15:32 nvme_rpc -- common/autotest_common.sh@976 -- # wait 67190 00:11:57.968 00:11:57.968 real 0m4.721s 00:11:57.968 user 0m8.613s 00:11:57.968 sys 0m0.806s 00:11:57.968 11:15:34 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:57.968 11:15:34 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.968 ************************************ 00:11:57.968 END TEST nvme_rpc 00:11:57.968 ************************************ 00:11:57.968 11:15:34 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:57.968 11:15:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:11:57.968 11:15:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:57.968 11:15:34 -- common/autotest_common.sh@10 -- # set +x 00:11:57.968 ************************************ 00:11:57.968 START TEST nvme_rpc_timeouts 00:11:57.968 ************************************ 00:11:57.968 11:15:34 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:57.968 * Looking for test storage... 00:11:57.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.968 11:15:35 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:57.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.968 --rc genhtml_branch_coverage=1 00:11:57.968 --rc genhtml_function_coverage=1 00:11:57.968 --rc genhtml_legend=1 00:11:57.968 --rc geninfo_all_blocks=1 00:11:57.968 --rc geninfo_unexecuted_blocks=1 00:11:57.968 00:11:57.968 ' 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:57.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.968 --rc genhtml_branch_coverage=1 00:11:57.968 --rc genhtml_function_coverage=1 00:11:57.968 --rc genhtml_legend=1 00:11:57.968 --rc geninfo_all_blocks=1 00:11:57.968 --rc geninfo_unexecuted_blocks=1 00:11:57.968 00:11:57.968 ' 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:57.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.968 --rc genhtml_branch_coverage=1 00:11:57.968 --rc genhtml_function_coverage=1 00:11:57.968 --rc genhtml_legend=1 00:11:57.968 --rc geninfo_all_blocks=1 00:11:57.968 --rc geninfo_unexecuted_blocks=1 00:11:57.968 00:11:57.968 ' 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:57.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.968 --rc genhtml_branch_coverage=1 00:11:57.968 --rc genhtml_function_coverage=1 00:11:57.968 --rc genhtml_legend=1 00:11:57.968 --rc geninfo_all_blocks=1 00:11:57.968 --rc geninfo_unexecuted_blocks=1 00:11:57.968 00:11:57.968 ' 00:11:57.968 11:15:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:57.968 11:15:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67266 00:11:57.968 11:15:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67266 00:11:57.968 11:15:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67304 00:11:57.968 11:15:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:57.968 11:15:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:57.968 11:15:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67304 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 67304 ']' 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:57.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:57.968 11:15:35 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:57.968 [2024-11-15 11:15:35.326271] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:11:57.968 [2024-11-15 11:15:35.326407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67304 ] 00:11:58.226 [2024-11-15 11:15:35.509314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:58.485 [2024-11-15 11:15:35.628300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.485 [2024-11-15 11:15:35.628333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.421 11:15:36 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:59.421 11:15:36 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:11:59.421 Checking default timeout settings: 00:11:59.421 11:15:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:59.421 11:15:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:59.679 Making settings changes with rpc: 00:11:59.679 11:15:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:59.679 11:15:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:59.679 Check default vs. modified settings: 00:11:59.679 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:11:59.679 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67266 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67266 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:00.247 Setting action_on_timeout is changed as expected. 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67266 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67266 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:00.247 Setting timeout_us is changed as expected. 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67266 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67266 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:00.247 Setting timeout_admin_us is changed as expected. 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67266 /tmp/settings_modified_67266 00:12:00.247 11:15:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67304 00:12:00.247 11:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 67304 ']' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 67304 00:12:00.247 11:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:12:00.247 11:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:00.247 11:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67304 00:12:00.248 11:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:00.248 11:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:00.248 killing process with pid 67304 00:12:00.248 11:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67304' 00:12:00.248 11:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 67304 00:12:00.248 11:15:37 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 67304 00:12:02.867 RPC TIMEOUT SETTING TEST PASSED. 00:12:02.867 11:15:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
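Stripped of xtrace noise, the pass/fail logic just traced is a small before/after diff over the two save_config dumps (file names exactly as in this run):

for setting in action_on_timeout timeout_us timeout_admin_us; do
  before=$(grep "$setting" /tmp/settings_default_67266 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  after=$(grep "$setting" /tmp/settings_modified_67266 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  [[ $before == "$after" ]] && { echo "Setting $setting was not changed!" >&2; exit 1; }
  echo "Setting $setting is changed as expected."
done

Here none/abort, 0/12000000, and 0/24000000 all differ, so the earlier bdev_nvme_set_options call took effect and the test passes.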
00:12:02.867 00:12:02.867 real 0m5.001s 00:12:02.867 user 0m9.396s 00:12:02.867 sys 0m0.778s 00:12:02.867 11:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:02.867 11:15:39 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:02.867 ************************************ 00:12:02.867 END TEST nvme_rpc_timeouts 00:12:02.867 ************************************ 00:12:02.867 11:15:40 -- spdk/autotest.sh@239 -- # uname -s 00:12:02.867 11:15:40 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:12:02.867 11:15:40 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:02.867 11:15:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:02.867 11:15:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:02.867 11:15:40 -- common/autotest_common.sh@10 -- # set +x 00:12:02.867 ************************************ 00:12:02.867 START TEST sw_hotplug 00:12:02.867 ************************************ 00:12:02.867 11:15:40 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:02.867 * Looking for test storage... 00:12:02.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:02.867 11:15:40 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:02.867 11:15:40 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:12:02.867 11:15:40 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:02.867 11:15:40 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:02.867 11:15:40 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.867 11:15:40 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.867 11:15:40 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.867 11:15:40 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.126 11:15:40 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:12:03.126 11:15:40 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.126 11:15:40 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:03.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.126 --rc genhtml_branch_coverage=1 00:12:03.126 --rc genhtml_function_coverage=1 00:12:03.126 --rc genhtml_legend=1 00:12:03.126 --rc geninfo_all_blocks=1 00:12:03.126 --rc geninfo_unexecuted_blocks=1 00:12:03.126 00:12:03.126 ' 00:12:03.126 11:15:40 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:03.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.126 --rc genhtml_branch_coverage=1 00:12:03.126 --rc genhtml_function_coverage=1 00:12:03.126 --rc genhtml_legend=1 00:12:03.126 --rc geninfo_all_blocks=1 00:12:03.126 --rc geninfo_unexecuted_blocks=1 00:12:03.126 00:12:03.126 ' 00:12:03.126 11:15:40 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:03.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.126 --rc genhtml_branch_coverage=1 00:12:03.126 --rc genhtml_function_coverage=1 00:12:03.126 --rc genhtml_legend=1 00:12:03.126 --rc geninfo_all_blocks=1 00:12:03.126 --rc geninfo_unexecuted_blocks=1 00:12:03.126 00:12:03.126 ' 00:12:03.126 11:15:40 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:03.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.126 --rc genhtml_branch_coverage=1 00:12:03.126 --rc genhtml_function_coverage=1 00:12:03.126 --rc genhtml_legend=1 00:12:03.126 --rc geninfo_all_blocks=1 00:12:03.126 --rc geninfo_unexecuted_blocks=1 00:12:03.126 00:12:03.126 ' 00:12:03.126 11:15:40 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:03.692 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:03.692 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:03.692 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:03.692 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:03.692 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:03.692 11:15:41 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:12:03.692 11:15:41 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:12:03.692 11:15:41 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
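The nvme_in_userspace expansion that follows walks every PCI function whose class code is 01 (mass storage), subclass 08 (non-volatile memory controller), prog-if 02 (NVM Express). Minus the per-step trace, the whole scan reduces to the single pipeline from scripts/common.sh:

# Class code 0108 plus prog-if 02 identifies an NVMe controller on the PCI bus.
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

On this VM it prints the four controllers 0000:00:10.0 through 0000:00:13.0, all already bound to uio_pci_generic per the setup.sh output above.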
00:12:03.692 11:15:41 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:12:03.692 11:15:41 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:12:03.692 11:15:41 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:12:03.692 11:15:41 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@233 -- # local class 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:03.693 11:15:41 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:03.951 11:15:41 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:12:03.951 11:15:41 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:03.951 11:15:41 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:12:03.951 11:15:41 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:03.951 11:15:41 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:04.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:04.781 Waiting for block devices as requested 00:12:04.781 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.781 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.781 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.040 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:10.308 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:10.308 11:15:47 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:12:10.308 11:15:47 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:10.875 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:12:10.875 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:10.875 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:12:11.134 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:12:11.700 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.700 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.700 11:15:48 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:12:11.700 11:15:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.700 11:15:49 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:12:11.700 11:15:49 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:11.700 11:15:49 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68191 00:12:11.700 11:15:49 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:12:11.700 11:15:49 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:11.700 11:15:49 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:11.700 11:15:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:12:11.700 11:15:49 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:11.700 11:15:49 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:11.700 11:15:49 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:11.700 11:15:49 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:11.700 11:15:49 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:12:11.700 11:15:49 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:11.700 11:15:49 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:11.700 11:15:49 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:12:11.700 11:15:49 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:11.700 11:15:49 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:11.959 Initializing NVMe Controllers 00:12:11.959 Attaching to 0000:00:10.0 00:12:11.959 Attaching to 0000:00:11.0 00:12:11.959 Attached to 0000:00:11.0 00:12:11.959 Attached to 0000:00:10.0 00:12:11.959 Initialization complete. Starting I/O... 
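xtrace does not display redirections, so the bare "echo 1" at sw_hotplug.sh@40 in the event below only shows the data being written, not its destination. A plausible reconstruction of the removal half of one hotplug event, assuming the standard Linux PCI sysfs nodes (the redirect targets here are an assumption, not taken from the trace):

  for dev in "${nvmes[@]}"; do
    # Assumed target: surprise-remove the function while the hotplug app
    # still holds it open, driving the controller into a failed state.
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
  done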
00:12:11.959 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:12:11.959 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:11.959 00:12:12.895 QEMU NVMe Ctrl (12341 ): 1556 I/Os completed (+1556) 00:12:12.895 QEMU NVMe Ctrl (12340 ): 1556 I/Os completed (+1556) 00:12:12.895 00:12:14.271 QEMU NVMe Ctrl (12341 ): 3704 I/Os completed (+2148) 00:12:14.271 QEMU NVMe Ctrl (12340 ): 3704 I/Os completed (+2148) 00:12:14.271 00:12:15.207 QEMU NVMe Ctrl (12341 ): 5920 I/Os completed (+2216) 00:12:15.207 QEMU NVMe Ctrl (12340 ): 5920 I/Os completed (+2216) 00:12:15.207 00:12:16.142 QEMU NVMe Ctrl (12341 ): 8120 I/Os completed (+2200) 00:12:16.142 QEMU NVMe Ctrl (12340 ): 8120 I/Os completed (+2200) 00:12:16.142 00:12:17.104 QEMU NVMe Ctrl (12341 ): 10332 I/Os completed (+2212) 00:12:17.104 QEMU NVMe Ctrl (12340 ): 10332 I/Os completed (+2212) 00:12:17.104 00:12:17.690 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:17.690 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:17.690 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:17.690 [2024-11-15 11:15:55.040522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:17.691 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:17.691 [2024-11-15 11:15:55.042736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 [2024-11-15 11:15:55.042900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 [2024-11-15 11:15:55.042956] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 [2024-11-15 11:15:55.043069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:17.691 [2024-11-15 11:15:55.045910] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 [2024-11-15 11:15:55.046058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 [2024-11-15 11:15:55.046110] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 [2024-11-15 11:15:55.046200] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:17.691 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:17.691 [2024-11-15 11:15:55.075498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
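The @56 through @62 echoes that follow are the reattach half of the event. A hedged mapping onto the standard sysfs interface; the duplicated BDF write at @60/@61 cannot be disambiguated from the trace alone, so treat the targets below as assumptions rather than the script's exact wiring:

  echo 1 > /sys/bus/pci/rescan   # bring the removed functions back
  for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe   # rebind under the override
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"   # clear override
  done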
00:12:17.691 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:17.691 [2024-11-15 11:15:55.077338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 [2024-11-15 11:15:55.077437] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 [2024-11-15 11:15:55.077470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 [2024-11-15 11:15:55.077492] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:17.691 [2024-11-15 11:15:55.080177] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 [2024-11-15 11:15:55.080225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 [2024-11-15 11:15:55.080247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 [2024-11-15 11:15:55.080264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.691 EAL: eal_parse_sysfs_value(): cannot read sysfs value /sys/bus/pci/devices/0000:00:11.0/subsystem_vendor 00:12:17.947 EAL: Scan for (pci) bus failed. 00:12:17.947 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:17.947 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:17.947 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:17.947 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:17.947 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:17.947 00:12:17.947 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:17.947 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:17.947 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:17.947 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:17.947 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:17.947 Attaching to 0000:00:10.0 00:12:17.947 Attached to 0000:00:10.0 00:12:18.205 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:18.205 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:18.205 11:15:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:18.205 Attaching to 0000:00:11.0 00:12:18.205 Attached to 0000:00:11.0 00:12:19.139 QEMU NVMe Ctrl (12340 ): 2108 I/Os completed (+2108) 00:12:19.139 QEMU NVMe Ctrl (12341 ): 1900 I/Os completed (+1900) 00:12:19.139 00:12:20.074 QEMU NVMe Ctrl (12340 ): 4284 I/Os completed (+2176) 00:12:20.074 QEMU NVMe Ctrl (12341 ): 4076 I/Os completed (+2176) 00:12:20.074 00:12:21.009 QEMU NVMe Ctrl (12340 ): 6476 I/Os completed (+2192) 00:12:21.009 QEMU NVMe Ctrl (12341 ): 6268 I/Os completed (+2192) 00:12:21.009 00:12:21.945 QEMU NVMe Ctrl (12340 ): 8648 I/Os completed (+2172) 00:12:21.945 QEMU NVMe Ctrl (12341 ): 8440 I/Os completed (+2172) 00:12:21.945 00:12:22.882 QEMU NVMe Ctrl (12340 ): 10836 I/Os completed (+2188) 00:12:22.882 QEMU NVMe Ctrl (12341 ): 10628 I/Os completed (+2188) 00:12:22.882 00:12:24.259 QEMU NVMe Ctrl (12340 ): 13024 I/Os completed (+2188) 00:12:24.259 QEMU NVMe Ctrl (12341 ): 12816 I/Os completed (+2188) 00:12:24.259 00:12:25.196 QEMU NVMe Ctrl (12340 ): 15184 I/Os completed (+2160) 00:12:25.196 QEMU NVMe Ctrl (12341 ): 14976 I/Os completed 
(+2160) 00:12:25.196 00:12:26.133 QEMU NVMe Ctrl (12340 ): 17344 I/Os completed (+2160) 00:12:26.133 QEMU NVMe Ctrl (12341 ): 17137 I/Os completed (+2161) 00:12:26.133 00:12:27.070 QEMU NVMe Ctrl (12340 ): 19512 I/Os completed (+2168) 00:12:27.070 QEMU NVMe Ctrl (12341 ): 19305 I/Os completed (+2168) 00:12:27.070 00:12:28.005 QEMU NVMe Ctrl (12340 ): 21700 I/Os completed (+2188) 00:12:28.005 QEMU NVMe Ctrl (12341 ): 21493 I/Os completed (+2188) 00:12:28.005 00:12:28.943 QEMU NVMe Ctrl (12340 ): 23896 I/Os completed (+2196) 00:12:28.943 QEMU NVMe Ctrl (12341 ): 23689 I/Os completed (+2196) 00:12:28.943 00:12:29.877 QEMU NVMe Ctrl (12340 ): 26084 I/Os completed (+2188) 00:12:29.877 QEMU NVMe Ctrl (12341 ): 25877 I/Os completed (+2188) 00:12:29.877 00:12:30.135 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:30.135 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:30.135 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:30.135 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:30.135 [2024-11-15 11:16:07.400185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:30.135 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:30.135 [2024-11-15 11:16:07.401925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 [2024-11-15 11:16:07.401985] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 [2024-11-15 11:16:07.402007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 [2024-11-15 11:16:07.402034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:30.135 [2024-11-15 11:16:07.405004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 [2024-11-15 11:16:07.405058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 [2024-11-15 11:16:07.405077] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 [2024-11-15 11:16:07.405097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:30.135 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:30.135 [2024-11-15 11:16:07.436925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
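For context on the "remove_attach_helper took NNs" line reported further down: helper_time comes from bash's TIMEFORMAT mechanism that timing_cmd set up earlier in the trace (TIMEFORMAT=%2R). A simplified sketch of that wrapper; the real helper also preserves the timed command's own output via exec:

  timing_sketch() {
    local time TIMEFORMAT=%2R               # real time, two decimals
    # The 'time' keyword reports on stderr; discard the command's own
    # output and capture just the measurement.
    time=$( { time "$@" > /dev/null 2>&1; } 2>&1 )
    printf '%s took %ss to complete\n' "$1" "$time"
  }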
00:12:30.135 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:30.135 [2024-11-15 11:16:07.438476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 [2024-11-15 11:16:07.438525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 [2024-11-15 11:16:07.438551] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 [2024-11-15 11:16:07.438582] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:30.135 [2024-11-15 11:16:07.441065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 [2024-11-15 11:16:07.441104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 [2024-11-15 11:16:07.441125] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 [2024-11-15 11:16:07.441144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.135 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:30.135 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:30.394 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:30.394 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:30.394 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:30.394 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:30.394 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:30.394 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:30.394 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:30.394 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:30.394 Attaching to 0000:00:10.0 00:12:30.394 Attached to 0000:00:10.0 00:12:30.394 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:30.394 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:30.394 11:16:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:30.394 Attaching to 0000:00:11.0 00:12:30.394 Attached to 0000:00:11.0 00:12:30.961 QEMU NVMe Ctrl (12340 ): 1295 I/Os completed (+1295) 00:12:30.961 QEMU NVMe Ctrl (12341 ): 1056 I/Os completed (+1056) 00:12:30.961 00:12:31.895 QEMU NVMe Ctrl (12340 ): 3471 I/Os completed (+2176) 00:12:31.895 QEMU NVMe Ctrl (12341 ): 3232 I/Os completed (+2176) 00:12:31.895 00:12:33.273 QEMU NVMe Ctrl (12340 ): 5631 I/Os completed (+2160) 00:12:33.273 QEMU NVMe Ctrl (12341 ): 5392 I/Os completed (+2160) 00:12:33.273 00:12:33.841 QEMU NVMe Ctrl (12340 ): 7795 I/Os completed (+2164) 00:12:33.841 QEMU NVMe Ctrl (12341 ): 7556 I/Os completed (+2164) 00:12:33.841 00:12:35.220 QEMU NVMe Ctrl (12340 ): 9963 I/Os completed (+2168) 00:12:35.220 QEMU NVMe Ctrl (12341 ): 9724 I/Os completed (+2168) 00:12:35.220 00:12:36.157 QEMU NVMe Ctrl (12340 ): 12139 I/Os completed (+2176) 00:12:36.157 QEMU NVMe Ctrl (12341 ): 11900 I/Os completed (+2176) 00:12:36.157 00:12:37.093 QEMU NVMe Ctrl (12340 ): 14283 I/Os completed (+2144) 00:12:37.093 QEMU NVMe Ctrl (12341 ): 14046 I/Os completed (+2146) 00:12:37.093 00:12:38.029 QEMU NVMe Ctrl (12340 ): 16479 I/Os completed (+2196) 00:12:38.029 QEMU NVMe Ctrl (12341 ): 16242 I/Os completed (+2196) 00:12:38.029 00:12:38.966 
QEMU NVMe Ctrl (12340 ): 18663 I/Os completed (+2184) 00:12:38.966 QEMU NVMe Ctrl (12341 ): 18426 I/Os completed (+2184) 00:12:38.966 00:12:39.903 QEMU NVMe Ctrl (12340 ): 20867 I/Os completed (+2204) 00:12:39.903 QEMU NVMe Ctrl (12341 ): 20630 I/Os completed (+2204) 00:12:39.903 00:12:40.840 QEMU NVMe Ctrl (12340 ): 23047 I/Os completed (+2180) 00:12:40.840 QEMU NVMe Ctrl (12341 ): 22812 I/Os completed (+2182) 00:12:40.840 00:12:42.214 QEMU NVMe Ctrl (12340 ): 25219 I/Os completed (+2172) 00:12:42.214 QEMU NVMe Ctrl (12341 ): 24984 I/Os completed (+2172) 00:12:42.214 00:12:42.473 11:16:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:42.473 11:16:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:42.473 11:16:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:42.473 11:16:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:42.473 [2024-11-15 11:16:19.755659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:42.473 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:42.473 [2024-11-15 11:16:19.757394] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 [2024-11-15 11:16:19.757456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 [2024-11-15 11:16:19.757478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 [2024-11-15 11:16:19.757502] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:42.473 [2024-11-15 11:16:19.760327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 [2024-11-15 11:16:19.760381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 [2024-11-15 11:16:19.760399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 [2024-11-15 11:16:19.760419] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 11:16:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:42.473 11:16:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:42.473 [2024-11-15 11:16:19.793570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:42.473 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:42.473 [2024-11-15 11:16:19.795115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 [2024-11-15 11:16:19.795166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 [2024-11-15 11:16:19.795189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 [2024-11-15 11:16:19.795208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:42.473 [2024-11-15 11:16:19.797711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 [2024-11-15 11:16:19.797756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 [2024-11-15 11:16:19.797779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 [2024-11-15 11:16:19.797797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.473 11:16:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:42.473 11:16:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:42.732 11:16:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:42.732 11:16:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:42.732 11:16:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:42.732 11:16:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:42.732 11:16:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:42.732 11:16:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:42.732 11:16:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:42.732 11:16:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:42.732 Attaching to 0000:00:10.0 00:12:42.732 Attached to 0000:00:10.0 00:12:42.732 11:16:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:42.991 11:16:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:42.991 11:16:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:42.991 Attaching to 0000:00:11.0 00:12:42.991 Attached to 0000:00:11.0 00:12:42.991 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:42.991 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:42.991 [2024-11-15 11:16:20.148706] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:55.201 11:16:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:55.201 11:16:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:55.201 11:16:32 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.11 00:12:55.201 11:16:32 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.11 00:12:55.201 11:16:32 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:55.201 11:16:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.11 00:12:55.201 11:16:32 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.11 2 00:12:55.201 remove_attach_helper took 43.11s to complete (handling 2 nvme drive(s)) 11:16:32 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:13:01.770 11:16:38 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68191 00:13:01.770 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68191) - No such process 00:13:01.770 11:16:38 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68191 00:13:01.770 11:16:38 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:13:01.770 11:16:38 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:13:01.770 11:16:38 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:13:01.770 11:16:38 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68736 00:13:01.770 11:16:38 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:13:01.770 11:16:38 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:01.770 11:16:38 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68736 00:13:01.770 11:16:38 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 68736 ']' 00:13:01.770 11:16:38 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.770 11:16:38 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:01.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.770 11:16:38 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.770 11:16:38 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:01.770 11:16:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:01.770 [2024-11-15 11:16:38.260089] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:13:01.770 [2024-11-15 11:16:38.260224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68736 ] 00:13:01.770 [2024-11-15 11:16:38.437077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.770 [2024-11-15 11:16:38.547577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.029 11:16:39 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:02.029 11:16:39 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:13:02.029 11:16:39 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:02.029 11:16:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.029 11:16:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.301 11:16:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.301 11:16:39 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:13:02.301 11:16:39 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:02.301 11:16:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:02.301 11:16:39 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:13:02.301 11:16:39 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:13:02.301 11:16:39 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:13:02.301 11:16:39 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:13:02.301 11:16:39 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:13:02.301 11:16:39 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:02.301 11:16:39 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:02.301 11:16:39 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:02.301 11:16:39 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:02.301 11:16:39 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:08.878 11:16:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:08.878 11:16:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:08.879 11:16:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:08.879 11:16:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:08.879 11:16:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:08.879 11:16:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:08.879 11:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:08.879 11:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:08.879 11:16:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:08.879 11:16:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:08.879 11:16:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:08.879 11:16:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.879 11:16:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:08.879 [2024-11-15 11:16:45.518580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:08.879 [2024-11-15 11:16:45.521338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.879 [2024-11-15 11:16:45.521397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.879 [2024-11-15 11:16:45.521419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.879 [2024-11-15 11:16:45.521447] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.879 [2024-11-15 11:16:45.521459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.879 [2024-11-15 11:16:45.521474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.879 [2024-11-15 11:16:45.521488] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.879 [2024-11-15 11:16:45.521502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.879 [2024-11-15 11:16:45.521513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.879 [2024-11-15 11:16:45.521531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.879 [2024-11-15 11:16:45.521542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.879 [2024-11-15 11:16:45.521567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.879 11:16:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.879 11:16:45 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:08.879 11:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:08.879 [2024-11-15 11:16:46.017762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:08.879 [2024-11-15 11:16:46.020095] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.879 [2024-11-15 11:16:46.020135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.879 [2024-11-15 11:16:46.020171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.879 [2024-11-15 11:16:46.020195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.879 [2024-11-15 11:16:46.020210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.879 [2024-11-15 11:16:46.020222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.879 [2024-11-15 11:16:46.020237] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.879 [2024-11-15 11:16:46.020248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.879 [2024-11-15 11:16:46.020263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.879 [2024-11-15 11:16:46.020276] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.879 [2024-11-15 11:16:46.020290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.879 [2024-11-15 11:16:46.020301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.879 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:08.879 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:08.879 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:08.879 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:08.879 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:08.879 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:08.879 11:16:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.879 11:16:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:08.879 11:16:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.879 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:08.879 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:08.879 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:08.879 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:08.879 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:09.138 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:09.138 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:09.138 11:16:46 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:09.138 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:09.138 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:09.138 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:09.138 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:09.138 11:16:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:21.343 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:21.343 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:21.343 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:21.343 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:21.343 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:21.343 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:21.343 11:16:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.343 11:16:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:21.343 11:16:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.343 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:21.343 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:21.343 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:21.343 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:21.343 [2024-11-15 11:16:58.497749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:21.343 [2024-11-15 11:16:58.500457] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:21.343 [2024-11-15 11:16:58.500505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.343 [2024-11-15 11:16:58.500523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.343 [2024-11-15 11:16:58.500550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:21.344 [2024-11-15 11:16:58.500575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.344 [2024-11-15 11:16:58.500590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.344 [2024-11-15 11:16:58.500604] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:21.344 [2024-11-15 11:16:58.500617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.344 [2024-11-15 11:16:58.500629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.344 [2024-11-15 11:16:58.500644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:21.344 [2024-11-15 11:16:58.500655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.344 [2024-11-15 11:16:58.500669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.344 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:21.344 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:21.344 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:21.344 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:21.344 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:21.344 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:21.344 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:21.344 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:21.344 11:16:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.344 11:16:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:21.344 11:16:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.344 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:21.344 11:16:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:21.910 [2024-11-15 11:16:59.096798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:21.910 [2024-11-15 11:16:59.099189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:21.910 [2024-11-15 11:16:59.099230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.910 [2024-11-15 11:16:59.099268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.910 [2024-11-15 11:16:59.099291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:21.910 [2024-11-15 11:16:59.099305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.910 [2024-11-15 11:16:59.099317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.910 [2024-11-15 11:16:59.099332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:21.910 [2024-11-15 11:16:59.099343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.910 [2024-11-15 11:16:59.099357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.911 [2024-11-15 11:16:59.099371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:21.911 [2024-11-15 11:16:59.099384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.911 [2024-11-15 11:16:59.099395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.911 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:21.911 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:21.911 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:21.911 11:16:59 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:13:21.911 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:21.911 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:21.911 11:16:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.911 11:16:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:21.911 11:16:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.911 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:21.911 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:21.911 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:21.911 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:21.911 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:22.169 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:22.169 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:22.169 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:22.169 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:22.169 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:22.169 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:22.170 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:22.170 11:16:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:34.407 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:34.407 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:34.407 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:34.407 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:34.407 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:34.407 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:34.407 11:17:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.407 11:17:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:34.407 11:17:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.407 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:34.407 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:34.407 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:34.407 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:34.407 [2024-11-15 11:17:11.576752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
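The use_bdev=true rounds above observe removal through the SPDK target rather than sysfs: the test polls bdev_get_bdevs over RPC until no bdev reports the removed PCI address. A sketch of the bdev_bdfs helper and wait loop as expanded in the trace; the jq filter is verbatim, and rpc.py is assumed to be SPDK's scripts/rpc.py on PATH:

  bdev_bdfs() {
    rpc.py bdev_get_bdevs \
      | jq -r '.[].driver_specific.nvme[].pci_address' \
      | sort -u
  }
  # Poll until no bdev still reports a PCI address for the removed device.
  while [[ -n "$(bdev_bdfs)" ]]; do
    printf 'Still waiting for %s to be gone\n' "${nvmes[@]}"
    sleep 0.5
  done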
00:13:34.407 [2024-11-15 11:17:11.579525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.407 [2024-11-15 11:17:11.579582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.408 [2024-11-15 11:17:11.579601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.408 [2024-11-15 11:17:11.579626] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.408 [2024-11-15 11:17:11.579638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.408 [2024-11-15 11:17:11.579656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.408 [2024-11-15 11:17:11.579670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.408 [2024-11-15 11:17:11.579684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.408 [2024-11-15 11:17:11.579696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.408 [2024-11-15 11:17:11.579710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.408 [2024-11-15 11:17:11.579721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.408 [2024-11-15 11:17:11.579735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.408 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:34.408 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:34.408 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:34.408 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:34.408 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:34.408 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:34.408 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:34.408 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:34.408 11:17:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.408 11:17:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:34.408 11:17:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.408 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:34.408 11:17:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:34.666 [2024-11-15 11:17:11.976113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:34.666 [2024-11-15 11:17:11.978923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.666 [2024-11-15 11:17:11.978978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.666 [2024-11-15 11:17:11.978998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.666 [2024-11-15 11:17:11.979036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.666 [2024-11-15 11:17:11.979051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.666 [2024-11-15 11:17:11.979063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.666 [2024-11-15 11:17:11.979079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.666 [2024-11-15 11:17:11.979090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.666 [2024-11-15 11:17:11.979108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.666 [2024-11-15 11:17:11.979120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.666 [2024-11-15 11:17:11.979133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.666 [2024-11-15 11:17:11.979145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.926 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:34.926 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:34.926 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:34.926 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:34.926 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:34.926 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:34.926 11:17:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.926 11:17:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:34.926 11:17:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.926 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:34.926 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:34.926 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:34.926 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:34.926 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:35.185 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:35.185 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:35.185 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:35.185 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:35.185 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:35.185 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:35.185 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:35.185 11:17:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.15 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.15 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.15 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.15 2 00:13:47.430 remove_attach_helper took 45.15s to complete (handling 2 nvme drive(s)) 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:13:47.430 11:17:24 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:47.430 11:17:24 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:47.430 11:17:24 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:54.122 11:17:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:54.122 11:17:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:54.122 11:17:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:54.122 11:17:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:54.122 11:17:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:54.122 11:17:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:54.122 11:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:54.122 11:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:54.122 11:17:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:54.122 11:17:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:54.122 11:17:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:54.122 11:17:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.122 11:17:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:54.122 [2024-11-15 11:17:30.704918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:54.122 [2024-11-15 11:17:30.707159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.122 [2024-11-15 11:17:30.707207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.122 [2024-11-15 11:17:30.707224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.122 [2024-11-15 11:17:30.707251] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.122 [2024-11-15 11:17:30.707263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.122 [2024-11-15 11:17:30.707278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.122 [2024-11-15 11:17:30.707292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.122 [2024-11-15 11:17:30.707306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.122 [2024-11-15 11:17:30.707317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.122 [2024-11-15 11:17:30.707333] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.122 [2024-11-15 11:17:30.707344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.122 [2024-11-15 11:17:30.707360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.122 11:17:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.122 11:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:54.122 11:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:54.122 [2024-11-15 11:17:31.104253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
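The round that has just begun was gated (sw_hotplug.sh@119/@120, traced above) on cycling SPDK's hotplug monitor over RPC before rerunning debug_remove_attach_helper. The equivalent manual invocation, again assuming scripts/rpc.py on PATH:

  rpc.py bdev_nvme_set_hotplug -d   # stop monitoring for hotplug events
  rpc.py bdev_nvme_set_hotplug -e   # re-enable monitoring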
00:13:54.122 [2024-11-15 11:17:31.105910] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.122 [2024-11-15 11:17:31.105950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.122 [2024-11-15 11:17:31.105970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.122 [2024-11-15 11:17:31.105992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.122 [2024-11-15 11:17:31.106006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.122 [2024-11-15 11:17:31.106017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.122 [2024-11-15 11:17:31.106034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.122 [2024-11-15 11:17:31.106045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.122 [2024-11-15 11:17:31.106059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.122 [2024-11-15 11:17:31.106073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.122 [2024-11-15 11:17:31.106088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.122 [2024-11-15 11:17:31.106100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.122 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:54.122 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:54.122 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:54.122 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:54.122 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:54.122 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:54.122 11:17:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.122 11:17:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:54.122 11:17:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.122 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:54.122 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:54.122 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.122 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.122 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:54.122 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:54.381 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.381 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.381 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.381 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:54.381 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:54.381 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.381 11:17:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:06.593 11:17:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.593 11:17:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:06.593 11:17:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:06.593 11:17:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.593 11:17:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:06.593 [2024-11-15 11:17:43.783893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:06.593 [2024-11-15 11:17:43.786253] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.593 [2024-11-15 11:17:43.786303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.593 [2024-11-15 11:17:43.786320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.593 [2024-11-15 11:17:43.786356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.593 [2024-11-15 11:17:43.786368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.593 [2024-11-15 11:17:43.786383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.593 [2024-11-15 11:17:43.786397] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.593 [2024-11-15 11:17:43.786411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.593 [2024-11-15 11:17:43.786422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.593 [2024-11-15 11:17:43.786437] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.593 [2024-11-15 11:17:43.786449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.593 [2024-11-15 11:17:43.786463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.593 11:17:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:06.593 11:17:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:06.852 [2024-11-15 11:17:44.183287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:06.852 [2024-11-15 11:17:44.185464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.852 [2024-11-15 11:17:44.185507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.852 [2024-11-15 11:17:44.185526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.852 [2024-11-15 11:17:44.185549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.852 [2024-11-15 11:17:44.185576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.852 [2024-11-15 11:17:44.185588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.852 [2024-11-15 11:17:44.185604] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.852 [2024-11-15 11:17:44.185616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.852 [2024-11-15 11:17:44.185630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.852 [2024-11-15 11:17:44.185642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.852 [2024-11-15 11:17:44.185656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.852 [2024-11-15 11:17:44.185667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.112 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:07.112 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:07.112 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:07.112 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:07.112 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:07.112 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:07.112 11:17:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.112 11:17:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:07.112 11:17:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.112 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:07.112 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:07.112 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:07.112 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:07.112 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:07.371 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:07.371 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:07.371 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:07.371 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:07.371 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:07.371 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:07.371 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:07.371 11:17:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:19.583 11:17:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.583 11:17:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:19.583 11:17:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:19.583 [2024-11-15 11:17:56.763055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:19.583 [2024-11-15 11:17:56.765756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.583 [2024-11-15 11:17:56.765802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.583 [2024-11-15 11:17:56.765820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.583 [2024-11-15 11:17:56.765849] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.583 [2024-11-15 11:17:56.765861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.583 [2024-11-15 11:17:56.765876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.583 [2024-11-15 11:17:56.765889] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.583 [2024-11-15 11:17:56.765906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.583 [2024-11-15 11:17:56.765919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.583 [2024-11-15 11:17:56.765934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.583 [2024-11-15 11:17:56.765945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.583 [2024-11-15 11:17:56.765959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:19.583 11:17:56 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:19.583 11:17:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.583 11:17:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:19.583 11:17:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:19.583 11:17:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:19.842 [2024-11-15 11:17:57.162408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:19.842 [2024-11-15 11:17:57.166798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.842 [2024-11-15 11:17:57.166841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.842 [2024-11-15 11:17:57.166860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.842 [2024-11-15 11:17:57.166900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.842 [2024-11-15 11:17:57.166914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.842 [2024-11-15 11:17:57.166926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.842 [2024-11-15 11:17:57.166942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.842 [2024-11-15 11:17:57.166953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.842 [2024-11-15 11:17:57.166967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.842 [2024-11-15 11:17:57.166980] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.842 [2024-11-15 11:17:57.166997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.842 [2024-11-15 11:17:57.167008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.100 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:20.100 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:20.100 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:20.100 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:20.100 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:20.100 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:14:20.100 11:17:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.100 11:17:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:20.100 11:17:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.100 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:20.100 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:20.358 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:20.358 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:20.358 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:20.358 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:20.358 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:20.358 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:20.358 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:20.358 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:20.359 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:20.359 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:20.359 11:17:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:32.589 11:18:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:32.589 11:18:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:32.589 11:18:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:32.589 11:18:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:32.589 11:18:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:32.589 11:18:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.589 11:18:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:32.589 11:18:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.17 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.17 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:14:32.589 11:18:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:14:32.589 11:18:09 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:14:32.589 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 11:18:09 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:32.589 11:18:09 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68736 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 68736 ']' 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 68736 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68736 00:14:32.589 killing process with pid 68736 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68736' 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@971 -- # kill 68736 00:14:32.589 11:18:09 sw_hotplug -- common/autotest_common.sh@976 -- # wait 68736 00:14:35.122 11:18:12 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:35.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:35.947 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:35.947 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:35.947 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:35.947 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:36.207 00:14:36.207 real 2m33.323s 00:14:36.207 user 1m51.084s 00:14:36.207 sys 0m22.405s 00:14:36.207 11:18:13 sw_hotplug -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:36.207 11:18:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:36.207 ************************************ 00:14:36.207 END TEST sw_hotplug 00:14:36.207 ************************************ 00:14:36.207 11:18:13 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:14:36.207 11:18:13 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:36.207 11:18:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:36.207 11:18:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:36.207 11:18:13 -- common/autotest_common.sh@10 -- # set +x 00:14:36.207 ************************************ 00:14:36.207 START TEST nvme_xnvme 00:14:36.207 ************************************ 00:14:36.207 11:18:13 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:36.207 * Looking for test storage... 
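The sw_hotplug run above repeatedly detaches and re-attaches both NVMe controllers, timing each cycle with remove_attach_helper. A hedged sketch of the helpers it exercises, in bash: bdev_bdfs is read directly off the xtrace (sw_hotplug.sh@12-13; rpc_cmd is the harness's rpc.py wrapper), while the sysfs targets in the detach/attach pair are assumptions, since the trace shows only bare echo commands (sw_hotplug.sh@40, @56-62) and xtrace hides redirection targets.

    # Grounded in the trace: PCI addresses of NVMe-backed bdevs, deduplicated.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }

    # Assumed sysfs plumbing behind the bare "echo" lines in the trace:
    detach_dev() {    # sw_hotplug.sh@40: echo 1
        echo 1 > "/sys/bus/pci/devices/$1/remove"
    }
    attach_dev() {    # sw_hotplug.sh@56-62: rescan, then rebind to uio_pci_generic
        echo 1 > /sys/bus/pci/rescan
        echo uio_pci_generic > "/sys/bus/pci/devices/$1/driver_override"
        echo "$1" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$1/driver_override"
    }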
00:14:36.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:36.207 11:18:13 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:36.207 11:18:13 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:14:36.207 11:18:13 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:36.466 11:18:13 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.466 11:18:13 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:36.467 11:18:13 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:36.467 11:18:13 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.467 11:18:13 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:36.467 11:18:13 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.467 11:18:13 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.467 11:18:13 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.467 11:18:13 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:36.467 11:18:13 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.467 11:18:13 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:36.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.467 --rc genhtml_branch_coverage=1 00:14:36.467 --rc genhtml_function_coverage=1 00:14:36.467 --rc genhtml_legend=1 00:14:36.467 --rc geninfo_all_blocks=1 00:14:36.467 --rc geninfo_unexecuted_blocks=1 00:14:36.467 00:14:36.467 ' 00:14:36.467 11:18:13 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:36.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.467 --rc genhtml_branch_coverage=1 00:14:36.467 --rc genhtml_function_coverage=1 00:14:36.467 --rc genhtml_legend=1 00:14:36.467 --rc geninfo_all_blocks=1 00:14:36.467 --rc geninfo_unexecuted_blocks=1 00:14:36.467 00:14:36.467 ' 00:14:36.467 11:18:13 
nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:36.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.467 --rc genhtml_branch_coverage=1 00:14:36.467 --rc genhtml_function_coverage=1 00:14:36.467 --rc genhtml_legend=1 00:14:36.467 --rc geninfo_all_blocks=1 00:14:36.467 --rc geninfo_unexecuted_blocks=1 00:14:36.467 00:14:36.467 ' 00:14:36.467 11:18:13 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:36.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.467 --rc genhtml_branch_coverage=1 00:14:36.467 --rc genhtml_function_coverage=1 00:14:36.467 --rc genhtml_legend=1 00:14:36.467 --rc geninfo_all_blocks=1 00:14:36.467 --rc geninfo_unexecuted_blocks=1 00:14:36.467 00:14:36.467 ' 00:14:36.467 11:18:13 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:36.467 11:18:13 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:36.467 11:18:13 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.467 11:18:13 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.467 11:18:13 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.467 11:18:13 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.467 11:18:13 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.467 11:18:13 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.467 11:18:13 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:36.467 11:18:13 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.467 11:18:13 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:14:36.467 11:18:13 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:36.467 11:18:13 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:36.467 11:18:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:36.467 
************************************ 00:14:36.467 START TEST xnvme_to_malloc_dd_copy 00:14:36.467 ************************************ 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:36.467 11:18:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:36.467 { 00:14:36.467 "subsystems": [ 00:14:36.467 { 00:14:36.467 "subsystem": "bdev", 00:14:36.467 "config": [ 00:14:36.467 { 00:14:36.467 "params": { 00:14:36.467 "block_size": 512, 00:14:36.467 "num_blocks": 2097152, 00:14:36.467 "name": "malloc0" 00:14:36.467 }, 00:14:36.467 "method": "bdev_malloc_create" 00:14:36.467 }, 00:14:36.467 { 00:14:36.467 "params": { 00:14:36.467 "io_mechanism": "libaio", 00:14:36.467 "filename": "/dev/nullb0", 00:14:36.467 "name": "null0" 00:14:36.467 }, 00:14:36.467 "method": "bdev_xnvme_create" 00:14:36.467 }, 
00:14:36.467 { 00:14:36.467 "method": "bdev_wait_for_examine" 00:14:36.467 } 00:14:36.467 ] 00:14:36.467 } 00:14:36.467 ] 00:14:36.467 } 00:14:36.467 [2024-11-15 11:18:13.824431] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:14:36.467 [2024-11-15 11:18:13.824556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70096 ] 00:14:36.726 [2024-11-15 11:18:14.004646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.726 [2024-11-15 11:18:14.119698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.282  [2024-11-15T11:18:17.697Z] Copying: 263/1024 [MB] (263 MBps) [2024-11-15T11:18:18.632Z] Copying: 526/1024 [MB] (263 MBps) [2024-11-15T11:18:19.567Z] Copying: 788/1024 [MB] (261 MBps) [2024-11-15T11:18:23.773Z] Copying: 1024/1024 [MB] (average 261 MBps) 00:14:46.372 00:14:46.372 11:18:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:46.372 11:18:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:46.372 11:18:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:46.372 11:18:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:46.372 { 00:14:46.372 "subsystems": [ 00:14:46.372 { 00:14:46.372 "subsystem": "bdev", 00:14:46.372 "config": [ 00:14:46.372 { 00:14:46.372 "params": { 00:14:46.372 "block_size": 512, 00:14:46.372 "num_blocks": 2097152, 00:14:46.372 "name": "malloc0" 00:14:46.372 }, 00:14:46.372 "method": "bdev_malloc_create" 00:14:46.372 }, 00:14:46.372 { 00:14:46.372 "params": { 00:14:46.372 "io_mechanism": "libaio", 00:14:46.372 "filename": "/dev/nullb0", 00:14:46.372 "name": "null0" 00:14:46.372 }, 00:14:46.372 "method": "bdev_xnvme_create" 00:14:46.372 }, 00:14:46.372 { 00:14:46.372 "method": "bdev_wait_for_examine" 00:14:46.372 } 00:14:46.372 ] 00:14:46.372 } 00:14:46.372 ] 00:14:46.372 } 00:14:46.372 [2024-11-15 11:18:23.433302] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
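The JSON printed above is generated by gen_conf and streamed to spdk_dd over fd 62, as the xtrace shows (xnvme.sh@42: spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62). A minimal standalone sketch of the same invocation, assuming a heredoc in place of the harness's gen_conf plumbing:

    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # malloc0 is a 1 GiB ram bdev (2097152 blocks x 512 B); null0 wraps
    # /dev/nullb0 through the xnvme libaio backend, per the config above.
    "$spdk_dd" --ib=malloc0 --ob=null0 --json /dev/fd/62 62<<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
              "method": "bdev_malloc_create"
            },
            {
              "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    JSON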
00:14:46.372 [2024-11-15 11:18:23.433430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70205 ] 00:14:46.372 [2024-11-15 11:18:23.612565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.372 [2024-11-15 11:18:23.718258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.907  [2024-11-15T11:18:27.244Z] Copying: 262/1024 [MB] (262 MBps) [2024-11-15T11:18:28.179Z] Copying: 519/1024 [MB] (256 MBps) [2024-11-15T11:18:29.115Z] Copying: 779/1024 [MB] (260 MBps) [2024-11-15T11:18:33.307Z] Copying: 1024/1024 [MB] (average 261 MBps) 00:14:55.906 00:14:55.906 11:18:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:55.906 11:18:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:55.906 11:18:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:55.906 11:18:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:55.906 11:18:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:55.906 11:18:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:55.906 { 00:14:55.906 "subsystems": [ 00:14:55.906 { 00:14:55.906 "subsystem": "bdev", 00:14:55.906 "config": [ 00:14:55.906 { 00:14:55.906 "params": { 00:14:55.906 "block_size": 512, 00:14:55.906 "num_blocks": 2097152, 00:14:55.906 "name": "malloc0" 00:14:55.906 }, 00:14:55.906 "method": "bdev_malloc_create" 00:14:55.906 }, 00:14:55.906 { 00:14:55.906 "params": { 00:14:55.906 "io_mechanism": "io_uring", 00:14:55.906 "filename": "/dev/nullb0", 00:14:55.906 "name": "null0" 00:14:55.906 }, 00:14:55.906 "method": "bdev_xnvme_create" 00:14:55.906 }, 00:14:55.906 { 00:14:55.906 "method": "bdev_wait_for_examine" 00:14:55.906 } 00:14:55.906 ] 00:14:55.906 } 00:14:55.906 ] 00:14:55.906 } 00:14:55.906 [2024-11-15 11:18:32.973047] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
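After the libaio passes complete (averaging 261 MBps in both directions), the harness flips io_mechanism to io_uring and repeats both copy directions. A hedged reconstruction of the driving loop, with variable names taken from the xtrace; gen_conf, which serializes the method_* arrays into the JSON shown above, is assumed rather than reproduced here:

    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    xnvme_io=(libaio io_uring)                          # xnvme.sh@20-21
    declare -A method_bdev_xnvme_create_0=(
        [name]=null0 [filename]=/dev/nullb0             # xnvme.sh@35-36
    )
    for io in "${xnvme_io[@]}"; do                      # xnvme.sh@38
        method_bdev_xnvme_create_0[io_mechanism]=$io    # xnvme.sh@39
        "$spdk_dd" --ib=malloc0 --ob=null0 --json <(gen_conf)   # xnvme.sh@42
        "$spdk_dd" --ib=null0 --ob=malloc0 --json <(gen_conf)   # xnvme.sh@47
    done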
00:14:55.906 [2024-11-15 11:18:32.973177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70315 ] 00:14:55.906 [2024-11-15 11:18:33.154418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.906 [2024-11-15 11:18:33.263826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.440  [2024-11-15T11:18:36.777Z] Copying: 278/1024 [MB] (278 MBps) [2024-11-15T11:18:37.712Z] Copying: 555/1024 [MB] (277 MBps) [2024-11-15T11:18:38.648Z] Copying: 831/1024 [MB] (275 MBps) [2024-11-15T11:18:42.839Z] Copying: 1024/1024 [MB] (average 277 MBps) 00:15:05.438 00:15:05.438 11:18:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:05.438 11:18:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:05.438 11:18:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:05.438 11:18:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:05.438 { 00:15:05.438 "subsystems": [ 00:15:05.438 { 00:15:05.438 "subsystem": "bdev", 00:15:05.438 "config": [ 00:15:05.438 { 00:15:05.438 "params": { 00:15:05.438 "block_size": 512, 00:15:05.438 "num_blocks": 2097152, 00:15:05.438 "name": "malloc0" 00:15:05.438 }, 00:15:05.438 "method": "bdev_malloc_create" 00:15:05.438 }, 00:15:05.438 { 00:15:05.438 "params": { 00:15:05.438 "io_mechanism": "io_uring", 00:15:05.438 "filename": "/dev/nullb0", 00:15:05.438 "name": "null0" 00:15:05.439 }, 00:15:05.439 "method": "bdev_xnvme_create" 00:15:05.439 }, 00:15:05.439 { 00:15:05.439 "method": "bdev_wait_for_examine" 00:15:05.439 } 00:15:05.439 ] 00:15:05.439 } 00:15:05.439 ] 00:15:05.439 } 00:15:05.439 [2024-11-15 11:18:42.320842] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
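Both backends read and write /dev/nullb0, which init_null_blk set up at the start of the test. A rough sketch of that setup/teardown pair; the helper bodies are inferred from the xtrace (dd/common.sh@186-191) and may differ from the real common.sh:

    init_null_blk() {
        # the trace shows the module check and the modprobe on one source line
        [[ -e /sys/module/null_blk ]] || modprobe null_blk "$@"
        return
    }
    remove_null_blk() { modprobe -r null_blk; }

    init_null_blk gb=1    # backs /dev/nullb0 with a 1 GiB null block device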
00:15:05.439 [2024-11-15 11:18:42.320958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70424 ] 00:15:05.439 [2024-11-15 11:18:42.501983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.439 [2024-11-15 11:18:42.616611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.044  [2024-11-15T11:18:46.011Z] Copying: 281/1024 [MB] (281 MBps) [2024-11-15T11:18:47.389Z] Copying: 558/1024 [MB] (276 MBps) [2024-11-15T11:18:47.955Z] Copying: 837/1024 [MB] (278 MBps) [2024-11-15T11:18:52.145Z] Copying: 1024/1024 [MB] (average 279 MBps) 00:15:14.745 00:15:14.745 11:18:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:15:14.745 11:18:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:15:14.745 00:15:14.745 real 0m37.862s 00:15:14.745 user 0m33.086s 00:15:14.745 sys 0m4.287s 00:15:14.745 11:18:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:14.745 11:18:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:14.745 ************************************ 00:15:14.745 END TEST xnvme_to_malloc_dd_copy 00:15:14.745 ************************************ 00:15:14.745 11:18:51 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:14.745 11:18:51 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:14.745 11:18:51 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:14.745 11:18:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.745 ************************************ 00:15:14.745 START TEST xnvme_bdevperf 00:15:14.745 ************************************ 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:14.745 
11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:14.745 11:18:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:14.745 { 00:15:14.745 "subsystems": [ 00:15:14.745 { 00:15:14.745 "subsystem": "bdev", 00:15:14.745 "config": [ 00:15:14.745 { 00:15:14.745 "params": { 00:15:14.745 "io_mechanism": "libaio", 00:15:14.745 "filename": "/dev/nullb0", 00:15:14.745 "name": "null0" 00:15:14.745 }, 00:15:14.745 "method": "bdev_xnvme_create" 00:15:14.745 }, 00:15:14.745 { 00:15:14.745 "method": "bdev_wait_for_examine" 00:15:14.745 } 00:15:14.745 ] 00:15:14.745 } 00:15:14.745 ] 00:15:14.745 } 00:15:14.745 [2024-11-15 11:18:51.767537] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:15:14.745 [2024-11-15 11:18:51.767656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70550 ] 00:15:14.745 [2024-11-15 11:18:51.938460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.745 [2024-11-15 11:18:52.053773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.313 Running I/O for 5 seconds... 00:15:17.185 154048.00 IOPS, 601.75 MiB/s [2024-11-15T11:18:55.552Z] 154688.00 IOPS, 604.25 MiB/s [2024-11-15T11:18:56.488Z] 154474.67 IOPS, 603.42 MiB/s [2024-11-15T11:18:57.419Z] 154848.00 IOPS, 604.88 MiB/s 00:15:20.018 Latency(us) 00:15:20.018 [2024-11-15T11:18:57.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.018 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:20.018 null0 : 5.00 154525.10 603.61 0.00 0.00 411.70 126.66 1908.18 00:15:20.018 [2024-11-15T11:18:57.419Z] =================================================================================================================== 00:15:20.018 [2024-11-15T11:18:57.419Z] Total : 154525.10 603.61 0.00 0.00 411.70 126.66 1908.18 00:15:21.394 11:18:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:21.394 11:18:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:21.394 11:18:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:21.394 11:18:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:21.394 11:18:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:21.394 11:18:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:21.394 { 00:15:21.394 "subsystems": [ 00:15:21.394 { 00:15:21.394 "subsystem": "bdev", 00:15:21.394 "config": [ 00:15:21.394 { 00:15:21.394 "params": { 00:15:21.394 "io_mechanism": "io_uring", 00:15:21.394 "filename": "/dev/nullb0", 00:15:21.394 "name": "null0" 00:15:21.394 }, 00:15:21.394 "method": "bdev_xnvme_create" 00:15:21.394 }, 00:15:21.394 { 00:15:21.394 "method": 
"bdev_wait_for_examine" 00:15:21.394 } 00:15:21.394 ] 00:15:21.394 } 00:15:21.394 ] 00:15:21.394 } 00:15:21.394 [2024-11-15 11:18:58.652529] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:15:21.394 [2024-11-15 11:18:58.652652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70631 ] 00:15:21.653 [2024-11-15 11:18:58.833284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.653 [2024-11-15 11:18:58.944710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.912 Running I/O for 5 seconds... 00:15:24.224 182784.00 IOPS, 714.00 MiB/s [2024-11-15T11:19:02.561Z] 190368.00 IOPS, 743.62 MiB/s [2024-11-15T11:19:03.495Z] 192618.67 IOPS, 752.42 MiB/s [2024-11-15T11:19:04.430Z] 192448.00 IOPS, 751.75 MiB/s 00:15:27.029 Latency(us) 00:15:27.029 [2024-11-15T11:19:04.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.029 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:27.029 null0 : 5.00 193845.65 757.21 0.00 0.00 327.65 185.88 1750.26 00:15:27.029 [2024-11-15T11:19:04.430Z] =================================================================================================================== 00:15:27.029 [2024-11-15T11:19:04.430Z] Total : 193845.65 757.21 0.00 0.00 327.65 185.88 1750.26 00:15:28.002 11:19:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:15:28.002 11:19:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:15:28.292 00:15:28.292 real 0m13.766s 00:15:28.292 user 0m10.339s 00:15:28.292 sys 0m3.206s 00:15:28.292 11:19:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:28.292 11:19:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:28.292 ************************************ 00:15:28.292 END TEST xnvme_bdevperf 00:15:28.292 ************************************ 00:15:28.292 00:15:28.292 real 0m52.016s 00:15:28.292 user 0m43.602s 00:15:28.292 sys 0m7.708s 00:15:28.292 11:19:05 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:28.292 11:19:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.292 ************************************ 00:15:28.292 END TEST nvme_xnvme 00:15:28.292 ************************************ 00:15:28.292 11:19:05 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:28.292 11:19:05 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:28.292 11:19:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:28.292 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:15:28.292 ************************************ 00:15:28.292 START TEST blockdev_xnvme 00:15:28.292 ************************************ 00:15:28.292 11:19:05 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:28.292 * Looking for test storage... 
00:15:28.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:28.292 11:19:05 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:28.292 11:19:05 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:15:28.292 11:19:05 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:28.551 11:19:05 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.551 11:19:05 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:28.551 11:19:05 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.551 11:19:05 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:28.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.551 --rc genhtml_branch_coverage=1 00:15:28.551 --rc genhtml_function_coverage=1 00:15:28.551 --rc genhtml_legend=1 00:15:28.551 --rc geninfo_all_blocks=1 00:15:28.551 --rc geninfo_unexecuted_blocks=1 00:15:28.551 00:15:28.551 ' 00:15:28.551 11:19:05 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:28.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.551 --rc genhtml_branch_coverage=1 00:15:28.551 --rc genhtml_function_coverage=1 00:15:28.551 --rc genhtml_legend=1 
00:15:28.551 --rc geninfo_all_blocks=1 00:15:28.551 --rc geninfo_unexecuted_blocks=1 00:15:28.551 00:15:28.551 ' 00:15:28.551 11:19:05 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:28.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.551 --rc genhtml_branch_coverage=1 00:15:28.551 --rc genhtml_function_coverage=1 00:15:28.551 --rc genhtml_legend=1 00:15:28.551 --rc geninfo_all_blocks=1 00:15:28.551 --rc geninfo_unexecuted_blocks=1 00:15:28.551 00:15:28.551 ' 00:15:28.551 11:19:05 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:28.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.551 --rc genhtml_branch_coverage=1 00:15:28.551 --rc genhtml_function_coverage=1 00:15:28.551 --rc genhtml_legend=1 00:15:28.551 --rc geninfo_all_blocks=1 00:15:28.551 --rc geninfo_unexecuted_blocks=1 00:15:28.551 00:15:28.551 ' 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=70784 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 70784 00:15:28.551 11:19:05 blockdev_xnvme -- common/autotest_common.sh@833 -- # '[' -z 70784 ']' 00:15:28.551 11:19:05 blockdev_xnvme -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:28.551 11:19:05 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.551 11:19:05 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:28.551 11:19:05 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.551 11:19:05 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:28.551 11:19:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.551 [2024-11-15 11:19:05.884243] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:15:28.551 [2024-11-15 11:19:05.884357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70784 ] 00:15:28.810 [2024-11-15 11:19:06.064836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.810 [2024-11-15 11:19:06.174898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.746 11:19:07 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:29.746 11:19:07 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:15:29.746 11:19:07 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:29.746 11:19:07 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:29.746 11:19:07 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:29.746 11:19:07 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:29.746 11:19:07 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:30.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:30.574 Waiting for block devices as requested 00:15:30.574 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:30.832 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:30.832 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:31.090 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:36.362 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- 
# is_block_zoned nvme1n1 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:36.362 11:19:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:36.362 11:19:13 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:36.362 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:15:36.363 nvme0n1 00:15:36.363 nvme1n1 00:15:36.363 nvme2n1 00:15:36.363 nvme2n2 00:15:36.363 nvme2n3 00:15:36.363 nvme3n1 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.363 11:19:13 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3fa09a80-fceb-4781-accc-db24a755e6d5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3fa09a80-fceb-4781-accc-db24a755e6d5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0edcaa81-642f-4602-b369-e14e5960a42b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0edcaa81-642f-4602-b369-e14e5960a42b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "cd55ff7b-d6de-40e3-8cab-0988afc53073"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd55ff7b-d6de-40e3-8cab-0988afc53073",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "553c00fb-5cc3-45e2-bd27-fdde66d08bef"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "553c00fb-5cc3-45e2-bd27-fdde66d08bef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "9afcb6fe-2457-4215-8e13-7e37560e8e5b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9afcb6fe-2457-4215-8e13-7e37560e8e5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9ca5c927-99c3-480f-b569-bc7f8029833d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9ca5c927-99c3-480f-b569-bc7f8029833d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:36.363 11:19:13 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 70784 
00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@952 -- # '[' -z 70784 ']' 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 70784 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70784 00:15:36.363 killing process with pid 70784 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70784' 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 70784 00:15:36.363 11:19:13 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 70784 00:15:38.896 11:19:16 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:38.896 11:19:16 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:38.896 11:19:16 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:38.896 11:19:16 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:38.896 11:19:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:38.896 ************************************ 00:15:38.896 START TEST bdev_hello_world 00:15:38.896 ************************************ 00:15:38.896 11:19:16 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:38.896 [2024-11-15 11:19:16.152742] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:15:38.896 [2024-11-15 11:19:16.153478] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71160 ] 00:15:39.156 [2024-11-15 11:19:16.336055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.156 [2024-11-15 11:19:16.448588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.738 [2024-11-15 11:19:16.895030] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:39.738 [2024-11-15 11:19:16.895257] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:39.738 [2024-11-15 11:19:16.895287] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:39.738 [2024-11-15 11:19:16.897478] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:39.738 [2024-11-15 11:19:16.897819] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:39.738 [2024-11-15 11:19:16.897839] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:39.738 [2024-11-15 11:19:16.898125] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
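The hello_bdev pass just logged can be reproduced by hand with the same binary and config; a minimal sketch using the exact arguments from the run above:
  # Writes "Hello World!" to the first xnvme bdev and reads it back;
  # expect the same NOTICE sequence as above (open -> write -> read -> stop).
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b nvme0n1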
00:15:39.738 00:15:39.738 [2024-11-15 11:19:16.898148] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:40.674 00:15:40.674 real 0m1.941s 00:15:40.674 user 0m1.568s 00:15:40.674 sys 0m0.255s 00:15:40.674 11:19:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:40.674 11:19:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:40.674 ************************************ 00:15:40.674 END TEST bdev_hello_world 00:15:40.674 ************************************ 00:15:40.674 11:19:18 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:40.674 11:19:18 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:40.674 11:19:18 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:40.674 11:19:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.933 ************************************ 00:15:40.933 START TEST bdev_bounds 00:15:40.933 ************************************ 00:15:40.933 11:19:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:15:40.933 Process bdevio pid: 71202 00:15:40.933 11:19:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71202 00:15:40.933 11:19:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:40.933 11:19:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:40.933 11:19:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71202' 00:15:40.933 11:19:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71202 00:15:40.933 11:19:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 71202 ']' 00:15:40.933 11:19:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.933 11:19:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:40.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.933 11:19:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.933 11:19:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:40.933 11:19:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:40.933 [2024-11-15 11:19:18.168406] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
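The bdev_bounds run starting here is a two-step affair: launch bdevio waiting for RPC, then trigger the test matrix. A minimal sketch with the flags used above (-w: wait for the perform_tests RPC; -s 0 matches PRE_RESERVED_MEM=0 set earlier); the backgrounding and explicit kill are assumptions of this sketch, not the harness verbatim:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  # Run every suite/test against the bdevs in the config, then reap bdevio
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"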
00:15:40.933 [2024-11-15 11:19:18.168734] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71202 ] 00:15:41.192 [2024-11-15 11:19:18.349043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:41.192 [2024-11-15 11:19:18.468826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.192 [2024-11-15 11:19:18.468968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.192 [2024-11-15 11:19:18.468998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.782 11:19:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:41.782 11:19:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:15:41.782 11:19:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:41.782 I/O targets: 00:15:41.782 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:41.782 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:41.782 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:41.782 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:41.782 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:41.782 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:41.782 00:15:41.782 00:15:41.782 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.782 http://cunit.sourceforge.net/ 00:15:41.782 00:15:41.782 00:15:41.782 Suite: bdevio tests on: nvme3n1 00:15:41.782 Test: blockdev write read block ...passed 00:15:41.782 Test: blockdev write zeroes read block ...passed 00:15:41.782 Test: blockdev write zeroes read no split ...passed 00:15:41.782 Test: blockdev write zeroes read split ...passed 00:15:41.782 Test: blockdev write zeroes read split partial ...passed 00:15:41.782 Test: blockdev reset ...passed 00:15:41.782 Test: blockdev write read 8 blocks ...passed 00:15:41.782 Test: blockdev write read size > 128k ...passed 00:15:41.782 Test: blockdev write read invalid size ...passed 00:15:41.782 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:41.782 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:41.782 Test: blockdev write read max offset ...passed 00:15:41.782 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:41.782 Test: blockdev writev readv 8 blocks ...passed 00:15:41.782 Test: blockdev writev readv 30 x 1block ...passed 00:15:41.782 Test: blockdev writev readv block ...passed 00:15:41.782 Test: blockdev writev readv size > 128k ...passed 00:15:41.782 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:41.782 Test: blockdev comparev and writev ...passed 00:15:41.782 Test: blockdev nvme passthru rw ...passed 00:15:41.782 Test: blockdev nvme passthru vendor specific ...passed 00:15:41.782 Test: blockdev nvme admin passthru ...passed 00:15:41.782 Test: blockdev copy ...passed 00:15:41.782 Suite: bdevio tests on: nvme2n3 00:15:41.782 Test: blockdev write read block ...passed 00:15:41.782 Test: blockdev write zeroes read block ...passed 00:15:41.782 Test: blockdev write zeroes read no split ...passed 00:15:42.041 Test: blockdev write zeroes read split ...passed 00:15:42.041 Test: blockdev write zeroes read split partial ...passed 00:15:42.041 Test: blockdev reset ...passed 
00:15:42.041 Test: blockdev write read 8 blocks ...passed 00:15:42.041 Test: blockdev write read size > 128k ...passed 00:15:42.041 Test: blockdev write read invalid size ...passed 00:15:42.041 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.041 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.042 Test: blockdev write read max offset ...passed 00:15:42.042 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.042 Test: blockdev writev readv 8 blocks ...passed 00:15:42.042 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.042 Test: blockdev writev readv block ...passed 00:15:42.042 Test: blockdev writev readv size > 128k ...passed 00:15:42.042 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.042 Test: blockdev comparev and writev ...passed 00:15:42.042 Test: blockdev nvme passthru rw ...passed 00:15:42.042 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.042 Test: blockdev nvme admin passthru ...passed 00:15:42.042 Test: blockdev copy ...passed 00:15:42.042 Suite: bdevio tests on: nvme2n2 00:15:42.042 Test: blockdev write read block ...passed 00:15:42.042 Test: blockdev write zeroes read block ...passed 00:15:42.042 Test: blockdev write zeroes read no split ...passed 00:15:42.042 Test: blockdev write zeroes read split ...passed 00:15:42.042 Test: blockdev write zeroes read split partial ...passed 00:15:42.042 Test: blockdev reset ...passed 00:15:42.042 Test: blockdev write read 8 blocks ...passed 00:15:42.042 Test: blockdev write read size > 128k ...passed 00:15:42.042 Test: blockdev write read invalid size ...passed 00:15:42.042 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.042 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.042 Test: blockdev write read max offset ...passed 00:15:42.042 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.042 Test: blockdev writev readv 8 blocks ...passed 00:15:42.042 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.042 Test: blockdev writev readv block ...passed 00:15:42.042 Test: blockdev writev readv size > 128k ...passed 00:15:42.042 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.042 Test: blockdev comparev and writev ...passed 00:15:42.042 Test: blockdev nvme passthru rw ...passed 00:15:42.042 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.042 Test: blockdev nvme admin passthru ...passed 00:15:42.042 Test: blockdev copy ...passed 00:15:42.042 Suite: bdevio tests on: nvme2n1 00:15:42.042 Test: blockdev write read block ...passed 00:15:42.042 Test: blockdev write zeroes read block ...passed 00:15:42.042 Test: blockdev write zeroes read no split ...passed 00:15:42.042 Test: blockdev write zeroes read split ...passed 00:15:42.042 Test: blockdev write zeroes read split partial ...passed 00:15:42.042 Test: blockdev reset ...passed 00:15:42.042 Test: blockdev write read 8 blocks ...passed 00:15:42.042 Test: blockdev write read size > 128k ...passed 00:15:42.042 Test: blockdev write read invalid size ...passed 00:15:42.042 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.042 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.042 Test: blockdev write read max offset ...passed 00:15:42.042 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.042 Test: blockdev writev readv 8 blocks 
...passed 00:15:42.042 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.042 Test: blockdev writev readv block ...passed 00:15:42.042 Test: blockdev writev readv size > 128k ...passed 00:15:42.042 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.042 Test: blockdev comparev and writev ...passed 00:15:42.042 Test: blockdev nvme passthru rw ...passed 00:15:42.042 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.042 Test: blockdev nvme admin passthru ...passed 00:15:42.042 Test: blockdev copy ...passed 00:15:42.042 Suite: bdevio tests on: nvme1n1 00:15:42.042 Test: blockdev write read block ...passed 00:15:42.042 Test: blockdev write zeroes read block ...passed 00:15:42.042 Test: blockdev write zeroes read no split ...passed 00:15:42.042 Test: blockdev write zeroes read split ...passed 00:15:42.300 Test: blockdev write zeroes read split partial ...passed 00:15:42.300 Test: blockdev reset ...passed 00:15:42.300 Test: blockdev write read 8 blocks ...passed 00:15:42.300 Test: blockdev write read size > 128k ...passed 00:15:42.300 Test: blockdev write read invalid size ...passed 00:15:42.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.300 Test: blockdev write read max offset ...passed 00:15:42.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.300 Test: blockdev writev readv 8 blocks ...passed 00:15:42.300 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.300 Test: blockdev writev readv block ...passed 00:15:42.300 Test: blockdev writev readv size > 128k ...passed 00:15:42.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.300 Test: blockdev comparev and writev ...passed 00:15:42.300 Test: blockdev nvme passthru rw ...passed 00:15:42.300 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.300 Test: blockdev nvme admin passthru ...passed 00:15:42.300 Test: blockdev copy ...passed 00:15:42.300 Suite: bdevio tests on: nvme0n1 00:15:42.300 Test: blockdev write read block ...passed 00:15:42.300 Test: blockdev write zeroes read block ...passed 00:15:42.300 Test: blockdev write zeroes read no split ...passed 00:15:42.300 Test: blockdev write zeroes read split ...passed 00:15:42.300 Test: blockdev write zeroes read split partial ...passed 00:15:42.300 Test: blockdev reset ...passed 00:15:42.300 Test: blockdev write read 8 blocks ...passed 00:15:42.300 Test: blockdev write read size > 128k ...passed 00:15:42.300 Test: blockdev write read invalid size ...passed 00:15:42.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.300 Test: blockdev write read max offset ...passed 00:15:42.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.300 Test: blockdev writev readv 8 blocks ...passed 00:15:42.300 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.300 Test: blockdev writev readv block ...passed 00:15:42.300 Test: blockdev writev readv size > 128k ...passed 00:15:42.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.300 Test: blockdev comparev and writev ...passed 00:15:42.300 Test: blockdev nvme passthru rw ...passed 00:15:42.300 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.300 Test: blockdev nvme admin passthru ...passed 00:15:42.300 Test: blockdev copy ...passed 
00:15:42.300 00:15:42.300 Run Summary: Type Total Ran Passed Failed Inactive 00:15:42.300 suites 6 6 n/a 0 0 00:15:42.300 tests 138 138 138 0 0 00:15:42.300 asserts 780 780 780 0 n/a 00:15:42.300 00:15:42.300 Elapsed time = 1.288 seconds 00:15:42.300 0 00:15:42.300 11:19:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71202 00:15:42.300 11:19:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 71202 ']' 00:15:42.300 11:19:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 71202 00:15:42.300 11:19:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:15:42.300 11:19:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:42.300 11:19:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71202 00:15:42.300 killing process with pid 71202 00:15:42.300 11:19:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:42.300 11:19:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:42.300 11:19:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71202' 00:15:42.300 11:19:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 71202 00:15:42.300 11:19:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 71202 00:15:43.675 ************************************ 00:15:43.675 END TEST bdev_bounds 00:15:43.675 ************************************ 00:15:43.675 11:19:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:43.675 00:15:43.675 real 0m2.680s 00:15:43.675 user 0m6.601s 00:15:43.675 sys 0m0.415s 00:15:43.675 11:19:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:43.675 11:19:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:43.675 11:19:20 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:43.675 11:19:20 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:43.675 11:19:20 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:43.675 11:19:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.675 ************************************ 00:15:43.675 START TEST bdev_nbd 00:15:43.675 ************************************ 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
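Each attach/verify cycle in the nbd_function_test that follows reduces to one RPC plus a direct-I/O read check against the kernel device, as sketched below (assuming bdev_svc is listening on /var/tmp/spdk-nbd.sock; the /tmp output path is illustrative, the harness writes to test/bdev/nbdtest):
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  dev=$($RPC nbd_start_disk nvme0n1)        # RPC prints the allocated device, e.g. /dev/nbd0
  grep -q -w "${dev##*/}" /proc/partitions  # kernel registered the device?
  # One 4 KiB O_DIRECT read proves the NBD path end to end
  dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  $RPC nbd_stop_disk "$dev"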
00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71257 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71257 /var/tmp/spdk-nbd.sock 00:15:43.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 71257 ']' 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:43.675 11:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:43.675 [2024-11-15 11:19:20.936957] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:15:43.675 [2024-11-15 11:19:20.937080] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.933 [2024-11-15 11:19:21.118860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.933 [2024-11-15 11:19:21.235655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.500 11:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.759 
1+0 records in 00:15:44.759 1+0 records out 00:15:44.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056852 s, 7.2 MB/s 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.759 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.018 1+0 records in 00:15:45.018 1+0 records out 00:15:45.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608454 s, 6.7 MB/s 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:45.018 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:45.277 11:19:22 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.277 1+0 records in 00:15:45.277 1+0 records out 00:15:45.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636223 s, 6.4 MB/s 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:45.277 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.536 1+0 records in 00:15:45.536 1+0 records out 00:15:45.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735552 s, 5.6 MB/s 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:45.536 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:45.537 11:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.796 1+0 records in 00:15:45.796 1+0 records out 00:15:45.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735784 s, 5.6 MB/s 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:45.796 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:15:46.055 11:19:23 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.055 1+0 records in 00:15:46.055 1+0 records out 00:15:46.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102475 s, 4.0 MB/s 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:46.055 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:46.314 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:46.314 { 00:15:46.314 "nbd_device": "/dev/nbd0", 00:15:46.314 "bdev_name": "nvme0n1" 00:15:46.314 }, 00:15:46.314 { 00:15:46.314 "nbd_device": "/dev/nbd1", 00:15:46.314 "bdev_name": "nvme1n1" 00:15:46.314 }, 00:15:46.314 { 00:15:46.314 "nbd_device": "/dev/nbd2", 00:15:46.314 "bdev_name": "nvme2n1" 00:15:46.314 }, 00:15:46.314 { 00:15:46.314 "nbd_device": "/dev/nbd3", 00:15:46.314 "bdev_name": "nvme2n2" 00:15:46.314 }, 00:15:46.314 { 00:15:46.314 "nbd_device": "/dev/nbd4", 00:15:46.314 "bdev_name": "nvme2n3" 00:15:46.314 }, 00:15:46.314 { 00:15:46.314 "nbd_device": "/dev/nbd5", 00:15:46.314 "bdev_name": "nvme3n1" 00:15:46.314 } 00:15:46.314 ]' 00:15:46.314 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:46.314 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:46.314 { 00:15:46.314 "nbd_device": "/dev/nbd0", 00:15:46.314 "bdev_name": "nvme0n1" 00:15:46.314 }, 00:15:46.314 { 00:15:46.314 "nbd_device": "/dev/nbd1", 00:15:46.314 "bdev_name": "nvme1n1" 00:15:46.314 }, 00:15:46.314 { 00:15:46.314 "nbd_device": "/dev/nbd2", 00:15:46.314 "bdev_name": "nvme2n1" 00:15:46.314 }, 00:15:46.314 { 00:15:46.314 "nbd_device": "/dev/nbd3", 00:15:46.314 "bdev_name": "nvme2n2" 00:15:46.314 }, 00:15:46.314 { 00:15:46.314 "nbd_device": "/dev/nbd4", 00:15:46.314 "bdev_name": "nvme2n3" 00:15:46.314 }, 00:15:46.314 { 00:15:46.314 "nbd_device": "/dev/nbd5", 00:15:46.314 "bdev_name": "nvme3n1" 00:15:46.314 } 00:15:46.314 ]' 00:15:46.314 11:19:23 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:46.314 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:46.314 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.314 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:46.314 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:46.314 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:46.314 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.314 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:46.572 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:46.572 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:46.572 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:46.572 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.572 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.572 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:46.572 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.572 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.572 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.572 11:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:46.831 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:46.831 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:46.831 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:46.831 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.831 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.831 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:46.831 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.831 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.831 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.831 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:47.090 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:47.090 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:47.090 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:47.090 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.090 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.090 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:47.090 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.090 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.090 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.090 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.349 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:47.607 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:47.607 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:47.607 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:47.607 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.607 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.607 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:47.607 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.607 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.607 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:47.607 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.607 11:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.866 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:48.125 /dev/nbd0 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.125 1+0 records in 00:15:48.125 1+0 records out 00:15:48.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593309 s, 6.9 MB/s 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.125 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:48.386 /dev/nbd1 00:15:48.386 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:48.386 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:48.386 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:48.386 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:48.386 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:48.386 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:48.386 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:48.386 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:48.387 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:48.387 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:48.387 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.387 1+0 records in 00:15:48.387 1+0 records out 00:15:48.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000815042 s, 5.0 MB/s 00:15:48.387 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.387 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:48.387 11:19:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.387 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:48.387 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:48.387 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.387 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.387 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:15:48.648 /dev/nbd10 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.648 1+0 records in 00:15:48.648 1+0 records out 00:15:48.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673283 s, 6.1 MB/s 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.648 11:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:15:48.912 /dev/nbd11 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:48.912 11:19:26 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.912 1+0 records in 00:15:48.912 1+0 records out 00:15:48.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648339 s, 6.3 MB/s 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.912 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:15:49.180 /dev/nbd12 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.180 1+0 records in 00:15:49.180 1+0 records out 00:15:49.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683788 s, 6.0 MB/s 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:49.180 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:49.439 /dev/nbd13 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.439 1+0 records in 00:15:49.439 1+0 records out 00:15:49.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617492 s, 6.6 MB/s 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:49.439 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:49.698 { 00:15:49.698 "nbd_device": "/dev/nbd0", 00:15:49.698 "bdev_name": "nvme0n1" 00:15:49.698 }, 00:15:49.698 { 00:15:49.698 "nbd_device": "/dev/nbd1", 00:15:49.698 "bdev_name": "nvme1n1" 00:15:49.698 }, 00:15:49.698 { 00:15:49.698 "nbd_device": "/dev/nbd10", 00:15:49.698 "bdev_name": "nvme2n1" 00:15:49.698 }, 00:15:49.698 { 00:15:49.698 "nbd_device": "/dev/nbd11", 00:15:49.698 "bdev_name": "nvme2n2" 00:15:49.698 }, 00:15:49.698 { 00:15:49.698 "nbd_device": "/dev/nbd12", 00:15:49.698 "bdev_name": "nvme2n3" 00:15:49.698 }, 00:15:49.698 { 00:15:49.698 "nbd_device": "/dev/nbd13", 00:15:49.698 "bdev_name": "nvme3n1" 00:15:49.698 } 00:15:49.698 ]' 00:15:49.698 11:19:26 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:49.698 { 00:15:49.698 "nbd_device": "/dev/nbd0", 00:15:49.698 "bdev_name": "nvme0n1" 00:15:49.698 }, 00:15:49.698 { 00:15:49.698 "nbd_device": "/dev/nbd1", 00:15:49.698 "bdev_name": "nvme1n1" 00:15:49.698 }, 00:15:49.698 { 00:15:49.698 "nbd_device": "/dev/nbd10", 00:15:49.698 "bdev_name": "nvme2n1" 00:15:49.698 }, 00:15:49.698 { 00:15:49.698 "nbd_device": "/dev/nbd11", 00:15:49.698 "bdev_name": "nvme2n2" 00:15:49.698 }, 00:15:49.698 { 00:15:49.698 "nbd_device": "/dev/nbd12", 00:15:49.698 "bdev_name": "nvme2n3" 00:15:49.698 }, 00:15:49.698 { 00:15:49.698 "nbd_device": "/dev/nbd13", 00:15:49.698 "bdev_name": "nvme3n1" 00:15:49.698 } 00:15:49.698 ]' 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:49.698 /dev/nbd1 00:15:49.698 /dev/nbd10 00:15:49.698 /dev/nbd11 00:15:49.698 /dev/nbd12 00:15:49.698 /dev/nbd13' 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:49.698 /dev/nbd1 00:15:49.698 /dev/nbd10 00:15:49.698 /dev/nbd11 00:15:49.698 /dev/nbd12 00:15:49.698 /dev/nbd13' 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:49.698 11:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:49.698 256+0 records in 00:15:49.698 256+0 records out 00:15:49.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119523 s, 87.7 MB/s 00:15:49.698 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.698 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:49.956 256+0 records in 00:15:49.956 256+0 records out 00:15:49.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120429 s, 8.7 MB/s 00:15:49.956 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.956 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:49.956 256+0 records in 00:15:49.956 256+0 records out 00:15:49.956 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.147004 s, 7.1 MB/s 00:15:49.956 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.957 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:50.215 256+0 records in 00:15:50.215 256+0 records out 00:15:50.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124341 s, 8.4 MB/s 00:15:50.215 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:50.215 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:50.215 256+0 records in 00:15:50.215 256+0 records out 00:15:50.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12028 s, 8.7 MB/s 00:15:50.215 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:50.215 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:50.473 256+0 records in 00:15:50.473 256+0 records out 00:15:50.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121163 s, 8.7 MB/s 00:15:50.473 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:50.473 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:50.473 256+0 records in 00:15:50.473 256+0 records out 00:15:50.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125692 s, 8.3 MB/s 00:15:50.473 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:50.473 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:50.473 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:50.473 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:50.473 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:50.473 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:50.473 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:50.473 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.473 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.474 11:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:50.732 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:50.732 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:50.732 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:50.732 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.732 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.732 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:50.733 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.733 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.733 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.733 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:50.992 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:50.992 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:50.992 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:50.992 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.992 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.992 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:50.992 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.992 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.992 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.992 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:51.251 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:51.251 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:51.251 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:51.251 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.251 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.251 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:51.251 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.251 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.251 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.251 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:51.509 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:51.509 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:51.510 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:51.510 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.510 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.510 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:51.510 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.510 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.510 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.510 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:51.768 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:51.768 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:51.768 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:51.768 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.768 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.768 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:51.768 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.768 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.768 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.768 11:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:52.024 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:52.024 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:52.024 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:52.024 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.024 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:15:52.024 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:52.024 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.024 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.024 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:52.024 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:52.024 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:52.282 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:52.282 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:52.282 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:52.282 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:52.282 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:52.282 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:52.282 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:52.282 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:52.282 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:52.282 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:52.282 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:52.282 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:52.282 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:52.283 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:52.283 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:52.283 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:52.542 malloc_lvol_verify 00:15:52.542 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:52.542 8b23f268-6d75-405c-b0aa-25541e9aa49d 00:15:52.542 11:19:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:52.801 e01957df-328f-4e21-927a-876eac8ee8d0 00:15:52.801 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:53.059 /dev/nbd0 00:15:53.059 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:53.059 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:53.059 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:53.059 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:53.059 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:53.059 mke2fs 1.47.0 (5-Feb-2023) 00:15:53.059 
Discarding device blocks: 0/4096 done 00:15:53.059 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:53.059 00:15:53.059 Allocating group tables: 0/1 done 00:15:53.059 Writing inode tables: 0/1 done 00:15:53.059 Creating journal (1024 blocks): done 00:15:53.059 Writing superblocks and filesystem accounting information: 0/1 done 00:15:53.059 00:15:53.059 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:53.059 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:53.059 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:53.059 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:53.059 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:53.059 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:53.059 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71257 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 71257 ']' 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 71257 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71257 00:15:53.317 killing process with pid 71257 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71257' 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 71257 00:15:53.317 11:19:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 71257 00:15:54.695 ************************************ 00:15:54.695 END TEST bdev_nbd 00:15:54.695 ************************************ 00:15:54.695 11:19:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:54.695 00:15:54.695 real 0m10.953s 00:15:54.695 user 0m14.140s 00:15:54.695 sys 0m4.697s 00:15:54.695 11:19:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:54.695 11:19:31 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:15:54.695 11:19:31 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:54.695 11:19:31 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:54.695 11:19:31 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:54.695 11:19:31 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:54.695 11:19:31 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:54.695 11:19:31 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:54.695 11:19:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:54.695 ************************************ 00:15:54.695 START TEST bdev_fio 00:15:54.695 ************************************ 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:54.695 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:15:54.695 
11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:15:54.695 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:54.696 11:19:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:54.696 ************************************ 00:15:54.696 START TEST bdev_fio_rw_verify 00:15:54.696 ************************************ 00:15:54.696 11:19:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:54.696 11:19:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:54.696 11:19:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:54.696 11:19:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:54.696 11:19:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:54.696 11:19:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:54.696 11:19:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:15:54.696 11:19:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:54.696 11:19:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:54.696 11:19:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:54.696 11:19:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:15:54.696 11:19:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:54.696 11:19:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:54.696 11:19:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:54.696 11:19:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:15:54.696 11:19:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:54.696 11:19:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:54.954 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:54.954 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:54.954 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:54.954 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:54.954 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:54.955 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:54.955 fio-3.35 00:15:54.955 Starting 6 threads 00:16:07.187 00:16:07.187 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=71671: Fri Nov 15 11:19:43 2024 00:16:07.187 read: IOPS=33.4k, BW=131MiB/s (137MB/s)(1306MiB/10001msec) 00:16:07.187 slat (usec): min=2, max=578, avg= 6.30, stdev= 3.24 00:16:07.187 clat (usec): min=59, max=4058, avg=585.53, 
stdev=149.01 00:16:07.187 lat (usec): min=66, max=4065, avg=591.83, stdev=149.72 00:16:07.187 clat percentiles (usec): 00:16:07.187 | 50.000th=[ 619], 99.000th=[ 914], 99.900th=[ 1352], 99.990th=[ 3195], 00:16:07.187 | 99.999th=[ 3916] 00:16:07.187 write: IOPS=33.7k, BW=132MiB/s (138MB/s)(1316MiB/10001msec); 0 zone resets 00:16:07.187 slat (usec): min=11, max=2759, avg=18.87, stdev=16.56 00:16:07.187 clat (usec): min=82, max=3381, avg=647.31, stdev=154.92 00:16:07.187 lat (usec): min=97, max=3432, avg=666.18, stdev=156.24 00:16:07.187 clat percentiles (usec): 00:16:07.187 | 50.000th=[ 660], 99.000th=[ 1123], 99.900th=[ 1844], 99.990th=[ 2507], 00:16:07.187 | 99.999th=[ 2999] 00:16:07.187 bw ( KiB/s): min=114600, max=152456, per=100.00%, avg=135296.21, stdev=1844.78, samples=114 00:16:07.187 iops : min=28650, max=38114, avg=33823.89, stdev=461.20, samples=114 00:16:07.187 lat (usec) : 100=0.01%, 250=2.84%, 500=13.10%, 750=74.66%, 1000=8.22% 00:16:07.187 lat (msec) : 2=1.13%, 4=0.05%, 10=0.01% 00:16:07.187 cpu : usr=62.76%, sys=27.16%, ctx=7801, majf=0, minf=27686 00:16:07.187 IO depths : 1=12.2%, 2=24.7%, 4=50.3%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.187 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.187 issued rwts: total=334257,337009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.187 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:07.187 00:16:07.187 Run status group 0 (all jobs): 00:16:07.187 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=1306MiB (1369MB), run=10001-10001msec 00:16:07.187 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=1316MiB (1380MB), run=10001-10001msec 00:16:07.187 ----------------------------------------------------- 00:16:07.187 Suppressions used: 00:16:07.187 count bytes template 00:16:07.187 6 48 /usr/src/fio/parse.c 00:16:07.187 2509 240864 /usr/src/fio/iolog.c 00:16:07.187 1 8 libtcmalloc_minimal.so 00:16:07.187 1 904 libcrypto.so 00:16:07.187 ----------------------------------------------------- 00:16:07.187 00:16:07.187 00:16:07.187 real 0m12.507s 00:16:07.187 user 0m39.649s 00:16:07.187 sys 0m16.729s 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:07.187 ************************************ 00:16:07.187 END TEST bdev_fio_rw_verify 00:16:07.187 ************************************ 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local 
fio_dir=/usr/src/fio 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:07.187 11:19:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3fa09a80-fceb-4781-accc-db24a755e6d5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3fa09a80-fceb-4781-accc-db24a755e6d5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0edcaa81-642f-4602-b369-e14e5960a42b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0edcaa81-642f-4602-b369-e14e5960a42b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "cd55ff7b-d6de-40e3-8cab-0988afc53073"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd55ff7b-d6de-40e3-8cab-0988afc53073",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "553c00fb-5cc3-45e2-bd27-fdde66d08bef"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "553c00fb-5cc3-45e2-bd27-fdde66d08bef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "9afcb6fe-2457-4215-8e13-7e37560e8e5b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9afcb6fe-2457-4215-8e13-7e37560e8e5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9ca5c927-99c3-480f-b569-bc7f8029833d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9ca5c927-99c3-480f-b569-bc7f8029833d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:07.453 11:19:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:07.453 11:19:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.453 /home/vagrant/spdk_repo/spdk 00:16:07.453 11:19:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:07.453 11:19:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:07.453 11:19:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:16:07.453 00:16:07.453 real 0m12.744s 00:16:07.453 user 0m39.757s 00:16:07.453 sys 0m16.859s 00:16:07.453 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:07.453 11:19:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:07.453 ************************************ 00:16:07.453 END TEST bdev_fio 00:16:07.453 ************************************ 00:16:07.453 11:19:44 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:07.453 11:19:44 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:07.453 11:19:44 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:16:07.453 11:19:44 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:07.453 11:19:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:07.453 ************************************ 00:16:07.453 START TEST bdev_verify 00:16:07.453 ************************************ 00:16:07.453 11:19:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:07.453 [2024-11-15 11:19:44.761584] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:16:07.453 [2024-11-15 11:19:44.761700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71842 ] 00:16:07.712 [2024-11-15 11:19:44.942778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:07.712 [2024-11-15 11:19:45.060354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.712 [2024-11-15 11:19:45.060396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.280 Running I/O for 5 seconds... 
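The bdev_verify stage above is a plain bdevperf run against the xNVMe bdevs defined in bdev.json: -q 128 outstanding I/Os, -o 4096 bytes per I/O, -w verify as the workload, -t 5 seconds of runtime, with the -C and -m 0x3 (cores 0-1) flags passed straight through by the harness. A minimal sketch of the same invocation outside the test framework, assuming the repo layout used in this run:

    # Re-run the verify workload by hand; paths and flags are taken from the log above.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" \
        --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 \
        -C -m 0x3

The bdev_verify_big_io and bdev_write_zeroes stages that follow reuse exactly this pattern, swapping in -o 65536 (64 KiB I/Os) and -w write_zeroes -t 1 respectively.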
00:16:10.592 25408.00 IOPS, 99.25 MiB/s [2024-11-15T11:19:48.928Z] 24304.00 IOPS, 94.94 MiB/s [2024-11-15T11:19:49.861Z] 24128.00 IOPS, 94.25 MiB/s [2024-11-15T11:19:50.797Z] 24104.00 IOPS, 94.16 MiB/s [2024-11-15T11:19:50.797Z] 24364.80 IOPS, 95.18 MiB/s 00:16:13.396 Latency(us) 00:16:13.396 [2024-11-15T11:19:50.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.396 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:13.396 Verification LBA range: start 0x0 length 0xa0000 00:16:13.396 nvme0n1 : 5.03 1831.87 7.16 0.00 0.00 69756.95 15265.41 62325.00 00:16:13.396 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:13.396 Verification LBA range: start 0xa0000 length 0xa0000 00:16:13.396 nvme0n1 : 5.04 1853.40 7.24 0.00 0.00 68946.97 11896.49 57692.74 00:16:13.396 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:13.396 Verification LBA range: start 0x0 length 0xbd0bd 00:16:13.396 nvme1n1 : 5.03 2791.75 10.91 0.00 0.00 45606.35 5685.05 54323.82 00:16:13.396 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:13.396 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:13.396 nvme1n1 : 5.06 2780.25 10.86 0.00 0.00 45749.76 5316.58 56429.39 00:16:13.396 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:13.396 Verification LBA range: start 0x0 length 0x80000 00:16:13.396 nvme2n1 : 5.03 1832.73 7.16 0.00 0.00 69480.88 14212.63 55166.05 00:16:13.396 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:13.396 Verification LBA range: start 0x80000 length 0x80000 00:16:13.396 nvme2n1 : 5.07 1869.90 7.30 0.00 0.00 68096.00 7632.71 67378.38 00:16:13.396 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:13.396 Verification LBA range: start 0x0 length 0x80000 00:16:13.396 nvme2n2 : 5.06 1847.04 7.21 0.00 0.00 68764.59 10580.51 61903.88 00:16:13.396 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:13.396 Verification LBA range: start 0x80000 length 0x80000 00:16:13.396 nvme2n2 : 5.05 1851.38 7.23 0.00 0.00 68575.77 15475.97 61061.65 00:16:13.396 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:13.396 Verification LBA range: start 0x0 length 0x80000 00:16:13.396 nvme2n3 : 5.06 1847.74 7.22 0.00 0.00 68616.01 8685.49 64851.69 00:16:13.396 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:13.396 Verification LBA range: start 0x80000 length 0x80000 00:16:13.396 nvme2n3 : 5.07 1866.81 7.29 0.00 0.00 67898.02 4316.43 62746.11 00:16:13.396 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:13.396 Verification LBA range: start 0x0 length 0x20000 00:16:13.396 nvme3n1 : 5.06 1871.61 7.31 0.00 0.00 67630.96 2474.05 64851.69 00:16:13.396 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:13.396 Verification LBA range: start 0x20000 length 0x20000 00:16:13.396 nvme3n1 : 5.08 1866.38 7.29 0.00 0.00 67811.39 4869.14 65272.80 00:16:13.396 [2024-11-15T11:19:50.797Z] =================================================================================================================== 00:16:13.396 [2024-11-15T11:19:50.797Z] Total : 24110.86 94.18 0.00 0.00 63269.90 2474.05 67378.38 00:16:14.771 00:16:14.771 real 0m7.135s 00:16:14.771 user 0m10.905s 00:16:14.771 sys 0m2.080s 00:16:14.771 11:19:51 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:16:14.771 11:19:51 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:14.771 ************************************ 00:16:14.771 END TEST bdev_verify 00:16:14.771 ************************************ 00:16:14.771 11:19:51 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:14.771 11:19:51 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:16:14.771 11:19:51 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:14.771 11:19:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:14.771 ************************************ 00:16:14.771 START TEST bdev_verify_big_io 00:16:14.771 ************************************ 00:16:14.771 11:19:51 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:14.771 [2024-11-15 11:19:51.972465] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:16:14.771 [2024-11-15 11:19:51.972597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71947 ] 00:16:14.771 [2024-11-15 11:19:52.153633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:15.031 [2024-11-15 11:19:52.264229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.031 [2024-11-15 11:19:52.264261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.597 Running I/O for 5 seconds... 
00:16:20.745 1504.00 IOPS, 94.00 MiB/s [2024-11-15T11:19:58.713Z] 3145.00 IOPS, 196.56 MiB/s [2024-11-15T11:19:58.713Z] 3803.67 IOPS, 237.73 MiB/s 00:16:21.312 Latency(us) 00:16:21.312 [2024-11-15T11:19:58.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.312 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:21.312 Verification LBA range: start 0x0 length 0xa000 00:16:21.312 nvme0n1 : 5.74 211.72 13.23 0.00 0.00 593996.53 65693.92 889394.58 00:16:21.312 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:21.312 Verification LBA range: start 0xa000 length 0xa000 00:16:21.312 nvme0n1 : 5.76 177.65 11.10 0.00 0.00 709256.07 22529.64 791695.94 00:16:21.312 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:21.312 Verification LBA range: start 0x0 length 0xbd0b 00:16:21.312 nvme1n1 : 5.75 136.45 8.53 0.00 0.00 887832.93 8264.38 2129156.73 00:16:21.312 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:21.312 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:21.312 nvme1n1 : 5.77 163.61 10.23 0.00 0.00 743307.43 54744.93 1253237.82 00:16:21.312 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:21.312 Verification LBA range: start 0x0 length 0x8000 00:16:21.312 nvme2n1 : 5.68 180.17 11.26 0.00 0.00 658347.03 17792.10 697366.21 00:16:21.312 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:21.312 Verification LBA range: start 0x8000 length 0x8000 00:16:21.312 nvme2n1 : 5.78 174.97 10.94 0.00 0.00 673730.35 40216.47 862443.23 00:16:21.313 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:21.313 Verification LBA range: start 0x0 length 0x8000 00:16:21.313 nvme2n2 : 5.76 201.47 12.59 0.00 0.00 581298.60 66536.15 616512.15 00:16:21.313 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:21.313 Verification LBA range: start 0x8000 length 0x8000 00:16:21.313 nvme2n2 : 5.78 163.24 10.20 0.00 0.00 715181.73 36636.99 1118481.07 00:16:21.313 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:21.313 Verification LBA range: start 0x0 length 0x8000 00:16:21.313 nvme2n3 : 5.76 142.71 8.92 0.00 0.00 801203.81 63588.34 1563178.36 00:16:21.313 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:21.313 Verification LBA range: start 0x8000 length 0x8000 00:16:21.313 nvme2n3 : 5.78 152.85 9.55 0.00 0.00 745221.76 20002.96 1441897.28 00:16:21.313 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:21.313 Verification LBA range: start 0x0 length 0x2000 00:16:21.313 nvme3n1 : 5.76 183.21 11.45 0.00 0.00 611667.69 8474.94 916345.93 00:16:21.313 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:21.313 Verification LBA range: start 0x2000 length 0x2000 00:16:21.313 nvme3n1 : 5.79 171.42 10.71 0.00 0.00 647475.24 2197.69 822016.21 00:16:21.313 [2024-11-15T11:19:58.714Z] =================================================================================================================== 00:16:21.313 [2024-11-15T11:19:58.714Z] Total : 2059.46 128.72 0.00 0.00 687680.00 2197.69 2129156.73 00:16:22.686 00:16:22.686 real 0m8.201s 00:16:22.686 user 0m14.799s 00:16:22.686 sys 0m0.641s 00:16:22.686 11:20:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:22.686 11:20:00 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.686 ************************************ 00:16:22.686 END TEST bdev_verify_big_io 00:16:22.686 ************************************ 00:16:22.944 11:20:00 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:22.944 11:20:00 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:22.944 11:20:00 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:22.944 11:20:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:22.944 ************************************ 00:16:22.944 START TEST bdev_write_zeroes 00:16:22.944 ************************************ 00:16:22.944 11:20:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:22.944 [2024-11-15 11:20:00.247611] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:16:22.944 [2024-11-15 11:20:00.247768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72058 ] 00:16:23.203 [2024-11-15 11:20:00.428751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.203 [2024-11-15 11:20:00.537853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.770 Running I/O for 1 seconds... 
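Workloads such as write_zeroes only make sense on bdevs that advertise the matching capability, which is why the trim setup earlier piped the bdev dump through jq's select(.supported_io_types.unmap == true) filter. The same idiom for the write_zeroes run just started, sketched against a live target (bdev_get_bdevs returns an array, hence the extra .[]):

    # List only the bdevs that can service write_zeroes, mirroring the
    # unmap filter used when generating the trim fio job above.
    "$SPDK/scripts/rpc.py" bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'

All six xNVMe bdevs in the dump above report "write_zeroes": true, so all six appear in the per-job results that follow.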
00:16:24.707 67264.00 IOPS, 262.75 MiB/s 00:16:24.707 Latency(us) 00:16:24.707 [2024-11-15T11:20:02.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.707 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:24.707 nvme0n1 : 1.02 10931.84 42.70 0.00 0.00 11698.01 7737.99 26951.35 00:16:24.707 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:24.707 nvme1n1 : 1.02 12806.08 50.02 0.00 0.00 9957.54 6185.12 21161.02 00:16:24.707 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:24.707 nvme2n1 : 1.02 10913.23 42.63 0.00 0.00 11630.22 7737.99 25898.56 00:16:24.707 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:24.707 nvme2n2 : 1.02 10903.63 42.59 0.00 0.00 11634.72 7632.71 26424.96 00:16:24.707 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:24.707 nvme2n3 : 1.02 10894.08 42.55 0.00 0.00 11638.23 7580.07 26530.24 00:16:24.707 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:24.707 nvme3n1 : 1.02 10884.85 42.52 0.00 0.00 11640.59 7580.07 26740.79 00:16:24.707 [2024-11-15T11:20:02.108Z] =================================================================================================================== 00:16:24.707 [2024-11-15T11:20:02.108Z] Total : 67333.70 263.02 0.00 0.00 11327.19 6185.12 26951.35 00:16:26.086 00:16:26.086 real 0m3.009s 00:16:26.086 user 0m2.175s 00:16:26.086 sys 0m0.658s 00:16:26.086 11:20:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:26.086 11:20:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:26.086 ************************************ 00:16:26.086 END TEST bdev_write_zeroes 00:16:26.086 ************************************ 00:16:26.086 11:20:03 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:26.086 11:20:03 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:26.086 11:20:03 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:26.086 11:20:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.086 ************************************ 00:16:26.086 START TEST bdev_json_nonenclosed 00:16:26.086 ************************************ 00:16:26.086 11:20:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:26.086 [2024-11-15 11:20:03.336599] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:16:26.086 [2024-11-15 11:20:03.336723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72117 ] 00:16:26.344 [2024-11-15 11:20:03.516199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.344 [2024-11-15 11:20:03.637390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.344 [2024-11-15 11:20:03.637714] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:26.344 [2024-11-15 11:20:03.637747] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:26.344 [2024-11-15 11:20:03.637760] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:26.603 00:16:26.603 real 0m0.655s 00:16:26.603 user 0m0.408s 00:16:26.603 sys 0m0.143s 00:16:26.603 11:20:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:26.603 ************************************ 00:16:26.603 END TEST bdev_json_nonenclosed 00:16:26.603 ************************************ 00:16:26.603 11:20:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:26.603 11:20:03 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:26.603 11:20:03 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:26.603 11:20:03 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:26.603 11:20:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.603 ************************************ 00:16:26.603 START TEST bdev_json_nonarray 00:16:26.603 ************************************ 00:16:26.603 11:20:03 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:26.861 [2024-11-15 11:20:04.064442] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:16:26.861 [2024-11-15 11:20:04.064612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72147 ] 00:16:26.861 [2024-11-15 11:20:04.251925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.120 [2024-11-15 11:20:04.367136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.120 [2024-11-15 11:20:04.367245] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
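Both negative tests above are shape checks on the --json configuration file: bdev_json_nonenclosed feeds bdevperf a document whose top level is not a JSON object (rejected with "not enclosed in {}"), and bdev_json_nonarray feeds one whose "subsystems" key is not an array. The accepted shape is the one save_config emits later in this log. A minimal sketch of the three cases; the exact contents of the nonenclosed.json and nonarray.json fixtures are assumptions, only the error paths are from this run:

    # Accepted: top-level object whose "subsystems" key is an array.
    echo '{ "subsystems": [] }'  > good.json
    # Rejected by json_config_prepare_ctx: not enclosed in {}.
    echo '[ "subsystems" ]'      > nonenclosed.json
    # Rejected: "subsystems" is present but is an object, not an array.
    echo '{ "subsystems": {} }'  > nonarray.json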
00:16:27.120 [2024-11-15 11:20:04.367268] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:27.120 [2024-11-15 11:20:04.367281] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:27.380 00:16:27.380 real 0m0.665s 00:16:27.381 user 0m0.401s 00:16:27.381 sys 0m0.159s 00:16:27.381 ************************************ 00:16:27.381 END TEST bdev_json_nonarray 00:16:27.381 ************************************ 00:16:27.381 11:20:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:27.381 11:20:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:27.381 11:20:04 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:27.381 11:20:04 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:27.381 11:20:04 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:27.381 11:20:04 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:27.381 11:20:04 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:27.381 11:20:04 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:27.381 11:20:04 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:27.381 11:20:04 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:27.381 11:20:04 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:27.381 11:20:04 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:27.381 11:20:04 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:27.381 11:20:04 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:28.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:50.267 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:50.267 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:50.267 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:50.267 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:50.267 00:16:50.267 real 1m18.733s 00:16:50.267 user 1m42.629s 00:16:50.267 sys 1m26.707s 00:16:50.267 11:20:24 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:50.267 11:20:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:50.267 ************************************ 00:16:50.267 END TEST blockdev_xnvme 00:16:50.267 ************************************ 00:16:50.267 11:20:24 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:50.267 11:20:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:50.267 11:20:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:50.267 11:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.267 ************************************ 00:16:50.267 START TEST ublk 00:16:50.267 ************************************ 00:16:50.267 11:20:24 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:50.267 * Looking for test storage... 
00:16:50.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:50.267 11:20:24 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:50.267 11:20:24 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:16:50.267 11:20:24 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:50.267 11:20:24 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:50.267 11:20:24 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.267 11:20:24 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.267 11:20:24 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.267 11:20:24 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.267 11:20:24 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.267 11:20:24 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.267 11:20:24 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.267 11:20:24 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.267 11:20:24 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.267 11:20:24 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.267 11:20:24 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.267 11:20:24 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:50.267 11:20:24 ublk -- scripts/common.sh@345 -- # : 1 00:16:50.267 11:20:24 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.267 11:20:24 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:50.267 11:20:24 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:50.267 11:20:24 ublk -- scripts/common.sh@353 -- # local d=1 00:16:50.267 11:20:24 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.267 11:20:24 ublk -- scripts/common.sh@355 -- # echo 1 00:16:50.267 11:20:24 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.267 11:20:24 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:50.267 11:20:24 ublk -- scripts/common.sh@353 -- # local d=2 00:16:50.267 11:20:24 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.267 11:20:24 ublk -- scripts/common.sh@355 -- # echo 2 00:16:50.267 11:20:24 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.267 11:20:24 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.267 11:20:24 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.267 11:20:24 ublk -- scripts/common.sh@368 -- # return 0 00:16:50.267 11:20:24 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.267 11:20:24 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:50.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.267 --rc genhtml_branch_coverage=1 00:16:50.267 --rc genhtml_function_coverage=1 00:16:50.267 --rc genhtml_legend=1 00:16:50.267 --rc geninfo_all_blocks=1 00:16:50.267 --rc geninfo_unexecuted_blocks=1 00:16:50.267 00:16:50.267 ' 00:16:50.267 11:20:24 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:50.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.267 --rc genhtml_branch_coverage=1 00:16:50.267 --rc genhtml_function_coverage=1 00:16:50.267 --rc genhtml_legend=1 00:16:50.267 --rc geninfo_all_blocks=1 00:16:50.267 --rc geninfo_unexecuted_blocks=1 00:16:50.267 00:16:50.267 ' 00:16:50.267 11:20:24 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:50.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.267 --rc genhtml_branch_coverage=1 00:16:50.267 --rc 
genhtml_function_coverage=1 00:16:50.267 --rc genhtml_legend=1 00:16:50.267 --rc geninfo_all_blocks=1 00:16:50.267 --rc geninfo_unexecuted_blocks=1 00:16:50.267 00:16:50.267 ' 00:16:50.267 11:20:24 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:50.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.268 --rc genhtml_branch_coverage=1 00:16:50.268 --rc genhtml_function_coverage=1 00:16:50.268 --rc genhtml_legend=1 00:16:50.268 --rc geninfo_all_blocks=1 00:16:50.268 --rc geninfo_unexecuted_blocks=1 00:16:50.268 00:16:50.268 ' 00:16:50.268 11:20:24 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:50.268 11:20:24 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:50.268 11:20:24 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:50.268 11:20:24 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:50.268 11:20:24 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:50.268 11:20:24 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:50.268 11:20:24 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:50.268 11:20:24 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:50.268 11:20:24 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:50.268 11:20:24 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:50.268 11:20:24 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:50.268 11:20:24 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:50.268 11:20:24 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:50.268 11:20:24 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:50.268 11:20:24 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:50.268 11:20:24 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:50.268 11:20:24 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:50.268 11:20:24 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:50.268 11:20:24 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:50.268 11:20:24 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:50.268 11:20:24 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:50.268 11:20:24 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:50.268 11:20:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.268 ************************************ 00:16:50.268 START TEST test_save_ublk_config 00:16:50.268 ************************************ 00:16:50.268 11:20:24 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:16:50.268 11:20:24 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:50.268 11:20:24 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72455 00:16:50.268 11:20:24 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:50.268 11:20:24 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:50.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
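The save-config flow traced below boils down to four RPCs against a freshly started spdk_tgt: create the ublk kernel target, create a malloc bdev, expose it as /dev/ublkb0, and dump the resulting configuration. Roughly the same sequence by hand over scripts/rpc.py; the sizes match the bdev_malloc_create parameters in the saved JSON (8192 blocks of 4096 bytes = 32 MiB), but the exact option spellings below are assumptions worth checking against rpc.py -h:

    # Recreate test_save_config's setup manually, then snapshot it.
    "$SPDK/build/bin/spdk_tgt" -L ublk &
    "$SPDK/scripts/rpc.py" ublk_create_target -m 0x1              # cpumask "1" in the saved config
    "$SPDK/scripts/rpc.py" bdev_malloc_create -b malloc0 32 4096
    "$SPDK/scripts/rpc.py" ublk_start_disk malloc0 0 -q 1 -d 128  # -> /dev/ublkb0
    "$SPDK/scripts/rpc.py" save_config > saved_config.json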
00:16:50.268 11:20:24 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72455 00:16:50.268 11:20:24 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72455 ']' 00:16:50.268 11:20:24 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.268 11:20:24 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:50.268 11:20:24 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.268 11:20:24 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:50.268 11:20:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:50.268 [2024-11-15 11:20:24.705519] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:16:50.268 [2024-11-15 11:20:24.706308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72455 ] 00:16:50.268 [2024-11-15 11:20:24.899434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.268 [2024-11-15 11:20:25.009976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.268 11:20:25 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:50.268 11:20:25 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:16:50.268 11:20:25 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:50.268 11:20:25 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:50.268 11:20:25 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.268 11:20:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:50.268 [2024-11-15 11:20:25.888580] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:50.268 [2024-11-15 11:20:25.889651] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:50.268 malloc0 00:16:50.268 [2024-11-15 11:20:25.973710] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:50.268 [2024-11-15 11:20:25.973805] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:50.268 [2024-11-15 11:20:25.973819] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:50.268 [2024-11-15 11:20:25.973827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:50.268 [2024-11-15 11:20:25.981613] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:50.268 [2024-11-15 11:20:25.981639] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:50.268 [2024-11-15 11:20:25.989597] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:50.268 [2024-11-15 11:20:25.989705] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:50.268 [2024-11-15 11:20:26.006588] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:50.268 0 00:16:50.268 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.268 11:20:26 ublk.test_save_ublk_config -- 
ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:50.268 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.268 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:50.268 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.268 11:20:26 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:50.268 "subsystems": [ 00:16:50.268 { 00:16:50.268 "subsystem": "fsdev", 00:16:50.268 "config": [ 00:16:50.268 { 00:16:50.268 "method": "fsdev_set_opts", 00:16:50.268 "params": { 00:16:50.268 "fsdev_io_pool_size": 65535, 00:16:50.268 "fsdev_io_cache_size": 256 00:16:50.268 } 00:16:50.268 } 00:16:50.268 ] 00:16:50.268 }, 00:16:50.268 { 00:16:50.268 "subsystem": "keyring", 00:16:50.268 "config": [] 00:16:50.268 }, 00:16:50.268 { 00:16:50.268 "subsystem": "iobuf", 00:16:50.268 "config": [ 00:16:50.268 { 00:16:50.268 "method": "iobuf_set_options", 00:16:50.268 "params": { 00:16:50.268 "small_pool_count": 8192, 00:16:50.268 "large_pool_count": 1024, 00:16:50.268 "small_bufsize": 8192, 00:16:50.268 "large_bufsize": 135168, 00:16:50.268 "enable_numa": false 00:16:50.268 } 00:16:50.268 } 00:16:50.268 ] 00:16:50.268 }, 00:16:50.268 { 00:16:50.268 "subsystem": "sock", 00:16:50.268 "config": [ 00:16:50.268 { 00:16:50.268 "method": "sock_set_default_impl", 00:16:50.268 "params": { 00:16:50.268 "impl_name": "posix" 00:16:50.268 } 00:16:50.268 }, 00:16:50.268 { 00:16:50.268 "method": "sock_impl_set_options", 00:16:50.268 "params": { 00:16:50.268 "impl_name": "ssl", 00:16:50.268 "recv_buf_size": 4096, 00:16:50.268 "send_buf_size": 4096, 00:16:50.268 "enable_recv_pipe": true, 00:16:50.268 "enable_quickack": false, 00:16:50.268 "enable_placement_id": 0, 00:16:50.268 "enable_zerocopy_send_server": true, 00:16:50.268 "enable_zerocopy_send_client": false, 00:16:50.268 "zerocopy_threshold": 0, 00:16:50.268 "tls_version": 0, 00:16:50.268 "enable_ktls": false 00:16:50.268 } 00:16:50.268 }, 00:16:50.268 { 00:16:50.268 "method": "sock_impl_set_options", 00:16:50.268 "params": { 00:16:50.268 "impl_name": "posix", 00:16:50.268 "recv_buf_size": 2097152, 00:16:50.268 "send_buf_size": 2097152, 00:16:50.268 "enable_recv_pipe": true, 00:16:50.268 "enable_quickack": false, 00:16:50.268 "enable_placement_id": 0, 00:16:50.268 "enable_zerocopy_send_server": true, 00:16:50.268 "enable_zerocopy_send_client": false, 00:16:50.268 "zerocopy_threshold": 0, 00:16:50.268 "tls_version": 0, 00:16:50.268 "enable_ktls": false 00:16:50.268 } 00:16:50.268 } 00:16:50.268 ] 00:16:50.268 }, 00:16:50.268 { 00:16:50.268 "subsystem": "vmd", 00:16:50.268 "config": [] 00:16:50.268 }, 00:16:50.268 { 00:16:50.268 "subsystem": "accel", 00:16:50.268 "config": [ 00:16:50.268 { 00:16:50.268 "method": "accel_set_options", 00:16:50.268 "params": { 00:16:50.268 "small_cache_size": 128, 00:16:50.268 "large_cache_size": 16, 00:16:50.268 "task_count": 2048, 00:16:50.268 "sequence_count": 2048, 00:16:50.268 "buf_count": 2048 00:16:50.268 } 00:16:50.268 } 00:16:50.268 ] 00:16:50.268 }, 00:16:50.268 { 00:16:50.268 "subsystem": "bdev", 00:16:50.268 "config": [ 00:16:50.268 { 00:16:50.268 "method": "bdev_set_options", 00:16:50.268 "params": { 00:16:50.268 "bdev_io_pool_size": 65535, 00:16:50.268 "bdev_io_cache_size": 256, 00:16:50.268 "bdev_auto_examine": true, 00:16:50.268 "iobuf_small_cache_size": 128, 00:16:50.268 "iobuf_large_cache_size": 16 00:16:50.268 } 00:16:50.268 }, 00:16:50.268 { 00:16:50.268 "method": "bdev_raid_set_options", 
00:16:50.268 "params": { 00:16:50.268 "process_window_size_kb": 1024, 00:16:50.268 "process_max_bandwidth_mb_sec": 0 00:16:50.268 } 00:16:50.268 }, 00:16:50.268 { 00:16:50.268 "method": "bdev_iscsi_set_options", 00:16:50.268 "params": { 00:16:50.268 "timeout_sec": 30 00:16:50.268 } 00:16:50.268 }, 00:16:50.268 { 00:16:50.268 "method": "bdev_nvme_set_options", 00:16:50.268 "params": { 00:16:50.268 "action_on_timeout": "none", 00:16:50.268 "timeout_us": 0, 00:16:50.268 "timeout_admin_us": 0, 00:16:50.268 "keep_alive_timeout_ms": 10000, 00:16:50.268 "arbitration_burst": 0, 00:16:50.268 "low_priority_weight": 0, 00:16:50.268 "medium_priority_weight": 0, 00:16:50.268 "high_priority_weight": 0, 00:16:50.268 "nvme_adminq_poll_period_us": 10000, 00:16:50.268 "nvme_ioq_poll_period_us": 0, 00:16:50.269 "io_queue_requests": 0, 00:16:50.269 "delay_cmd_submit": true, 00:16:50.269 "transport_retry_count": 4, 00:16:50.269 "bdev_retry_count": 3, 00:16:50.269 "transport_ack_timeout": 0, 00:16:50.269 "ctrlr_loss_timeout_sec": 0, 00:16:50.269 "reconnect_delay_sec": 0, 00:16:50.269 "fast_io_fail_timeout_sec": 0, 00:16:50.269 "disable_auto_failback": false, 00:16:50.269 "generate_uuids": false, 00:16:50.269 "transport_tos": 0, 00:16:50.269 "nvme_error_stat": false, 00:16:50.269 "rdma_srq_size": 0, 00:16:50.269 "io_path_stat": false, 00:16:50.269 "allow_accel_sequence": false, 00:16:50.269 "rdma_max_cq_size": 0, 00:16:50.269 "rdma_cm_event_timeout_ms": 0, 00:16:50.269 "dhchap_digests": [ 00:16:50.269 "sha256", 00:16:50.269 "sha384", 00:16:50.269 "sha512" 00:16:50.269 ], 00:16:50.269 "dhchap_dhgroups": [ 00:16:50.269 "null", 00:16:50.269 "ffdhe2048", 00:16:50.269 "ffdhe3072", 00:16:50.269 "ffdhe4096", 00:16:50.269 "ffdhe6144", 00:16:50.269 "ffdhe8192" 00:16:50.269 ] 00:16:50.269 } 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "method": "bdev_nvme_set_hotplug", 00:16:50.269 "params": { 00:16:50.269 "period_us": 100000, 00:16:50.269 "enable": false 00:16:50.269 } 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "method": "bdev_malloc_create", 00:16:50.269 "params": { 00:16:50.269 "name": "malloc0", 00:16:50.269 "num_blocks": 8192, 00:16:50.269 "block_size": 4096, 00:16:50.269 "physical_block_size": 4096, 00:16:50.269 "uuid": "48c66f61-30fa-464a-a750-ced9cf02f137", 00:16:50.269 "optimal_io_boundary": 0, 00:16:50.269 "md_size": 0, 00:16:50.269 "dif_type": 0, 00:16:50.269 "dif_is_head_of_md": false, 00:16:50.269 "dif_pi_format": 0 00:16:50.269 } 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "method": "bdev_wait_for_examine" 00:16:50.269 } 00:16:50.269 ] 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "subsystem": "scsi", 00:16:50.269 "config": null 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "subsystem": "scheduler", 00:16:50.269 "config": [ 00:16:50.269 { 00:16:50.269 "method": "framework_set_scheduler", 00:16:50.269 "params": { 00:16:50.269 "name": "static" 00:16:50.269 } 00:16:50.269 } 00:16:50.269 ] 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "subsystem": "vhost_scsi", 00:16:50.269 "config": [] 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "subsystem": "vhost_blk", 00:16:50.269 "config": [] 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "subsystem": "ublk", 00:16:50.269 "config": [ 00:16:50.269 { 00:16:50.269 "method": "ublk_create_target", 00:16:50.269 "params": { 00:16:50.269 "cpumask": "1" 00:16:50.269 } 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "method": "ublk_start_disk", 00:16:50.269 "params": { 00:16:50.269 "bdev_name": "malloc0", 00:16:50.269 "ublk_id": 0, 00:16:50.269 "num_queues": 1, 00:16:50.269 "queue_depth": 128 
00:16:50.269 } 00:16:50.269 } 00:16:50.269 ] 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "subsystem": "nbd", 00:16:50.269 "config": [] 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "subsystem": "nvmf", 00:16:50.269 "config": [ 00:16:50.269 { 00:16:50.269 "method": "nvmf_set_config", 00:16:50.269 "params": { 00:16:50.269 "discovery_filter": "match_any", 00:16:50.269 "admin_cmd_passthru": { 00:16:50.269 "identify_ctrlr": false 00:16:50.269 }, 00:16:50.269 "dhchap_digests": [ 00:16:50.269 "sha256", 00:16:50.269 "sha384", 00:16:50.269 "sha512" 00:16:50.269 ], 00:16:50.269 "dhchap_dhgroups": [ 00:16:50.269 "null", 00:16:50.269 "ffdhe2048", 00:16:50.269 "ffdhe3072", 00:16:50.269 "ffdhe4096", 00:16:50.269 "ffdhe6144", 00:16:50.269 "ffdhe8192" 00:16:50.269 ] 00:16:50.269 } 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "method": "nvmf_set_max_subsystems", 00:16:50.269 "params": { 00:16:50.269 "max_subsystems": 1024 00:16:50.269 } 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "method": "nvmf_set_crdt", 00:16:50.269 "params": { 00:16:50.269 "crdt1": 0, 00:16:50.269 "crdt2": 0, 00:16:50.269 "crdt3": 0 00:16:50.269 } 00:16:50.269 } 00:16:50.269 ] 00:16:50.269 }, 00:16:50.269 { 00:16:50.269 "subsystem": "iscsi", 00:16:50.269 "config": [ 00:16:50.269 { 00:16:50.269 "method": "iscsi_set_options", 00:16:50.269 "params": { 00:16:50.269 "node_base": "iqn.2016-06.io.spdk", 00:16:50.269 "max_sessions": 128, 00:16:50.269 "max_connections_per_session": 2, 00:16:50.269 "max_queue_depth": 64, 00:16:50.269 "default_time2wait": 2, 00:16:50.269 "default_time2retain": 20, 00:16:50.269 "first_burst_length": 8192, 00:16:50.269 "immediate_data": true, 00:16:50.269 "allow_duplicated_isid": false, 00:16:50.269 "error_recovery_level": 0, 00:16:50.269 "nop_timeout": 60, 00:16:50.269 "nop_in_interval": 30, 00:16:50.269 "disable_chap": false, 00:16:50.269 "require_chap": false, 00:16:50.269 "mutual_chap": false, 00:16:50.269 "chap_group": 0, 00:16:50.269 "max_large_datain_per_connection": 64, 00:16:50.269 "max_r2t_per_connection": 4, 00:16:50.269 "pdu_pool_size": 36864, 00:16:50.269 "immediate_data_pool_size": 16384, 00:16:50.269 "data_out_pool_size": 2048 00:16:50.269 } 00:16:50.269 } 00:16:50.269 ] 00:16:50.269 } 00:16:50.269 ] 00:16:50.269 }' 00:16:50.269 11:20:26 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72455 00:16:50.269 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72455 ']' 00:16:50.269 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72455 00:16:50.269 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:16:50.269 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:50.269 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72455 00:16:50.269 killing process with pid 72455 00:16:50.269 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:50.269 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:50.269 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72455' 00:16:50.269 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72455 00:16:50.269 11:20:26 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72455 00:16:50.543 [2024-11-15 11:20:27.801223] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_STOP_DEV 00:16:50.543 [2024-11-15 11:20:27.836596] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:50.543 [2024-11-15 11:20:27.836724] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:50.543 [2024-11-15 11:20:27.845586] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:50.543 [2024-11-15 11:20:27.845648] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:50.543 [2024-11-15 11:20:27.845665] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:50.543 [2024-11-15 11:20:27.845688] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:50.543 [2024-11-15 11:20:27.845837] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:52.495 11:20:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:52.495 11:20:29 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72530 00:16:52.495 11:20:29 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72530 00:16:52.495 11:20:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:52.495 "subsystems": [ 00:16:52.495 { 00:16:52.495 "subsystem": "fsdev", 00:16:52.495 "config": [ 00:16:52.495 { 00:16:52.495 "method": "fsdev_set_opts", 00:16:52.495 "params": { 00:16:52.495 "fsdev_io_pool_size": 65535, 00:16:52.495 "fsdev_io_cache_size": 256 00:16:52.495 } 00:16:52.495 } 00:16:52.495 ] 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "subsystem": "keyring", 00:16:52.495 "config": [] 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "subsystem": "iobuf", 00:16:52.495 "config": [ 00:16:52.495 { 00:16:52.495 "method": "iobuf_set_options", 00:16:52.495 "params": { 00:16:52.495 "small_pool_count": 8192, 00:16:52.495 "large_pool_count": 1024, 00:16:52.495 "small_bufsize": 8192, 00:16:52.495 "large_bufsize": 135168, 00:16:52.495 "enable_numa": false 00:16:52.495 } 00:16:52.495 } 00:16:52.495 ] 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "subsystem": "sock", 00:16:52.495 "config": [ 00:16:52.495 { 00:16:52.495 "method": "sock_set_default_impl", 00:16:52.495 "params": { 00:16:52.495 "impl_name": "posix" 00:16:52.495 } 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "method": "sock_impl_set_options", 00:16:52.495 "params": { 00:16:52.495 "impl_name": "ssl", 00:16:52.495 "recv_buf_size": 4096, 00:16:52.495 "send_buf_size": 4096, 00:16:52.495 "enable_recv_pipe": true, 00:16:52.495 "enable_quickack": false, 00:16:52.495 "enable_placement_id": 0, 00:16:52.495 "enable_zerocopy_send_server": true, 00:16:52.495 "enable_zerocopy_send_client": false, 00:16:52.495 "zerocopy_threshold": 0, 00:16:52.495 "tls_version": 0, 00:16:52.495 "enable_ktls": false 00:16:52.495 } 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "method": "sock_impl_set_options", 00:16:52.495 "params": { 00:16:52.495 "impl_name": "posix", 00:16:52.495 "recv_buf_size": 2097152, 00:16:52.495 "send_buf_size": 2097152, 00:16:52.495 "enable_recv_pipe": true, 00:16:52.495 "enable_quickack": false, 00:16:52.495 "enable_placement_id": 0, 00:16:52.495 "enable_zerocopy_send_server": true, 00:16:52.495 "enable_zerocopy_send_client": false, 00:16:52.495 "zerocopy_threshold": 0, 00:16:52.495 "tls_version": 0, 00:16:52.495 "enable_ktls": false 00:16:52.495 } 00:16:52.495 } 00:16:52.495 ] 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "subsystem": "vmd", 00:16:52.495 "config": [] 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "subsystem": "accel", 00:16:52.495 "config": [ 00:16:52.495 { 
00:16:52.495 "method": "accel_set_options", 00:16:52.495 "params": { 00:16:52.495 "small_cache_size": 128, 00:16:52.495 "large_cache_size": 16, 00:16:52.495 "task_count": 2048, 00:16:52.495 "sequence_count": 2048, 00:16:52.495 "buf_count": 2048 00:16:52.495 } 00:16:52.495 } 00:16:52.495 ] 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "subsystem": "bdev", 00:16:52.495 "config": [ 00:16:52.495 { 00:16:52.495 "method": "bdev_set_options", 00:16:52.495 "params": { 00:16:52.495 "bdev_io_pool_size": 65535, 00:16:52.495 "bdev_io_cache_size": 256, 00:16:52.495 "bdev_auto_examine": true, 00:16:52.495 "iobuf_small_cache_size": 128, 00:16:52.495 "iobuf_large_cache_size": 16 00:16:52.495 } 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "method": "bdev_raid_set_options", 00:16:52.495 "params": { 00:16:52.495 "process_window_size_kb": 1024, 00:16:52.495 "process_max_bandwidth_mb_sec": 0 00:16:52.495 } 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "method": "bdev_iscsi_set_options", 00:16:52.495 "params": { 00:16:52.495 "timeout_sec": 30 00:16:52.495 } 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "method": "bdev_nvme_set_options", 00:16:52.495 "params": { 00:16:52.495 "action_on_timeout": "none", 00:16:52.495 "timeout_us": 0, 00:16:52.495 "timeout_admin_us": 0, 00:16:52.495 "keep_alive_timeout_ms": 10000, 00:16:52.495 "arbitration_burst": 0, 00:16:52.495 "low_priority_weight": 0, 00:16:52.495 "medium_priority_weight": 0, 00:16:52.495 "high_priority_weight": 0, 00:16:52.495 "nvme_adminq_poll_period_us": 10000, 00:16:52.495 "nvme_ioq_poll_period_us": 0, 00:16:52.495 "io_queue_requests": 0, 00:16:52.495 "delay_cmd_submit": true, 00:16:52.495 "transport_retry_count": 4, 00:16:52.495 "bdev_retry_count": 3, 00:16:52.495 "transport_ack_timeout": 0, 00:16:52.495 "ctrlr_loss_timeout_sec": 0, 00:16:52.495 "reconnect_delay_sec": 0, 00:16:52.495 "fast_io_fail_timeout_sec": 0, 00:16:52.495 "disable_auto_failback": false, 00:16:52.495 "generate_uuids": false, 00:16:52.495 "transport_tos": 0, 00:16:52.495 "nvme_error_stat": false, 00:16:52.495 "rdma_srq_size": 0, 00:16:52.495 "io_path_stat": false, 00:16:52.495 "allow_accel_sequence": false, 00:16:52.495 "rdma_max_cq_size": 0, 00:16:52.495 "rdma_cm_event_timeout_ms": 0, 00:16:52.495 "dhchap_digests": [ 00:16:52.495 "sha256", 00:16:52.495 "sha384", 00:16:52.495 "sha512" 00:16:52.495 ], 00:16:52.495 "dhchap_dhgroups": [ 00:16:52.495 "null", 00:16:52.495 "ffdhe2048", 00:16:52.495 "ffdhe3072", 00:16:52.495 "ffdhe4096", 00:16:52.495 "ffdhe6144", 00:16:52.495 "ffdhe8192" 00:16:52.495 ] 00:16:52.495 } 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "method": "bdev_nvme_set_hotplug", 00:16:52.495 "params": { 00:16:52.495 "period_us": 100000, 00:16:52.495 "enable": false 00:16:52.495 } 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "method": "bdev_malloc_create", 00:16:52.495 "params": { 00:16:52.495 "name": "malloc0", 00:16:52.495 "num_blocks": 8192, 00:16:52.495 "block_size": 4096, 00:16:52.495 "physical_block_size": 4096, 00:16:52.495 "uuid": "48c66f61-30fa-464a-a750-ced9cf02f137", 00:16:52.495 "optimal_io_boundary": 0, 00:16:52.495 "md_size": 0, 00:16:52.495 "dif_type": 0, 00:16:52.495 "dif_is_head_of_md": false, 00:16:52.495 "dif_pi_format": 0 00:16:52.495 } 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "method": "bdev_wait_for_examine" 00:16:52.495 } 00:16:52.495 ] 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "subsystem": "scsi", 00:16:52.495 "config": null 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "subsystem": "scheduler", 00:16:52.495 "config": [ 00:16:52.495 { 00:16:52.495 "method": 
"framework_set_scheduler", 00:16:52.495 "params": { 00:16:52.495 "name": "static" 00:16:52.495 } 00:16:52.495 } 00:16:52.495 ] 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "subsystem": "vhost_scsi", 00:16:52.495 "config": [] 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "subsystem": "vhost_blk", 00:16:52.495 "config": [] 00:16:52.495 }, 00:16:52.495 { 00:16:52.495 "subsystem": "ublk", 00:16:52.495 "config": [ 00:16:52.495 { 00:16:52.495 "method": "ublk_create_target", 00:16:52.495 "params": { 00:16:52.495 "cpumask": "1" 00:16:52.495 } 00:16:52.495 }, 00:16:52.496 { 00:16:52.496 "method": "ublk_start_disk", 00:16:52.496 "params": { 00:16:52.496 "bdev_name": "malloc0", 00:16:52.496 "ublk_id": 0, 00:16:52.496 "num_queues": 1, 00:16:52.496 "queue_depth": 128 00:16:52.496 } 00:16:52.496 } 00:16:52.496 ] 00:16:52.496 }, 00:16:52.496 { 00:16:52.496 "subsystem": "nbd", 00:16:52.496 "config": [] 00:16:52.496 }, 00:16:52.496 { 00:16:52.496 "subsystem": "nvmf", 00:16:52.496 "config": [ 00:16:52.496 { 00:16:52.496 "method": "nvmf_set_config", 00:16:52.496 "params": { 00:16:52.496 "discovery_filter": "match_any", 00:16:52.496 "admin_cmd_passthru": { 00:16:52.496 "identify_ctrlr": false 00:16:52.496 }, 00:16:52.496 "dhchap_digests": [ 00:16:52.496 "sha256", 00:16:52.496 "sha384", 00:16:52.496 "sha512" 00:16:52.496 ], 00:16:52.496 "dhchap_dhgroups": [ 00:16:52.496 "null", 00:16:52.496 "ffdhe2048", 00:16:52.496 "ffdhe3072", 00:16:52.496 "ffdhe4096", 00:16:52.496 "ffdhe6144", 00:16:52.496 "ffdhe81 11:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72530 ']' 00:16:52.496 11:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:52.496 92" 00:16:52.496 ] 00:16:52.496 } 00:16:52.496 }, 00:16:52.496 { 00:16:52.496 "method": "nvmf_set_max_subsystems", 00:16:52.496 "params": { 00:16:52.496 "max_subsystems": 1024 00:16:52.496 } 00:16:52.496 }, 00:16:52.496 { 00:16:52.496 "method": "nvmf_set_crdt", 00:16:52.496 "params": { 00:16:52.496 "crdt1": 0, 00:16:52.496 "crdt2": 0, 00:16:52.496 "crdt3": 0 00:16:52.496 } 00:16:52.496 } 00:16:52.496 ] 00:16:52.496 }, 00:16:52.496 { 00:16:52.496 "subsystem": "iscsi", 00:16:52.496 "config": [ 00:16:52.496 { 00:16:52.496 "method": "iscsi_set_options", 00:16:52.496 "params": { 00:16:52.496 "node_base": "iqn.2016-06.io.spdk", 00:16:52.496 "max_sessions": 128, 00:16:52.496 "max_connections_per_session": 2, 00:16:52.496 "max_queue_depth": 64, 00:16:52.496 "default_time2wait": 2, 00:16:52.496 "default_time2retain": 20, 00:16:52.496 "first_burst_length": 8192, 00:16:52.496 "immediate_data": true, 00:16:52.496 "allow_duplicated_isid": false, 00:16:52.496 "error_recovery_level": 0, 00:16:52.496 "nop_timeout": 60, 00:16:52.496 "nop_in_interval": 30, 00:16:52.496 "disable_chap": false, 00:16:52.496 "require_chap": false, 00:16:52.496 "mutual_chap": false, 00:16:52.496 "chap_group": 0, 00:16:52.496 "max_large_datain_per_connection": 64, 00:16:52.496 "max_r2t_per_connection": 4, 00:16:52.496 "pdu_pool_size": 36864, 00:16:52.496 "immediate_data_pool_size": 16384, 00:16:52.496 "data_out_pool_size": 2048 00:16:52.496 } 00:16:52.496 } 00:16:52.496 ] 00:16:52.496 } 00:16:52.496 ] 00:16:52.496 }' 00:16:52.496 11:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:52.496 11:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.496 11:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:52.496 11:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:52.496 [2024-11-15 11:20:29.813803] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:16:52.496 [2024-11-15 11:20:29.813943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72530 ] 00:16:52.754 [2024-11-15 11:20:29.996230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.754 [2024-11-15 11:20:30.109241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.129 [2024-11-15 11:20:31.127576] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:54.129 [2024-11-15 11:20:31.128660] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:54.129 [2024-11-15 11:20:31.135706] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:54.129 [2024-11-15 11:20:31.135787] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:54.129 [2024-11-15 11:20:31.135800] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:54.129 [2024-11-15 11:20:31.135808] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:54.129 [2024-11-15 11:20:31.143679] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:54.129 [2024-11-15 11:20:31.143705] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:54.129 [2024-11-15 11:20:31.150592] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:54.129 [2024-11-15 11:20:31.150691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:54.129 [2024-11-15 11:20:31.167578] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72530 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72530 ']' 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72530 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72530 00:16:54.129 killing process with pid 72530 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:54.129 
11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72530' 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72530 00:16:54.129 11:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72530 00:16:55.506 [2024-11-15 11:20:32.857702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:55.506 [2024-11-15 11:20:32.888668] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:55.506 [2024-11-15 11:20:32.888788] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:55.506 [2024-11-15 11:20:32.896590] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:55.506 [2024-11-15 11:20:32.896642] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:55.506 [2024-11-15 11:20:32.896652] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:55.506 [2024-11-15 11:20:32.896678] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:55.506 [2024-11-15 11:20:32.896824] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:57.408 11:20:34 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:57.408 00:16:57.408 real 0m10.186s 00:16:57.408 user 0m7.569s 00:16:57.408 sys 0m3.057s 00:16:57.408 11:20:34 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:57.408 11:20:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:57.408 ************************************ 00:16:57.408 END TEST test_save_ublk_config 00:16:57.408 ************************************ 00:16:57.666 11:20:34 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72617 00:16:57.666 11:20:34 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:57.666 11:20:34 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:57.666 11:20:34 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72617 00:16:57.666 11:20:34 ublk -- common/autotest_common.sh@833 -- # '[' -z 72617 ']' 00:16:57.666 11:20:34 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.666 11:20:34 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:57.666 11:20:34 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.666 11:20:34 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:57.666 11:20:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.666 [2024-11-15 11:20:34.940644] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
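Editor's note: a second target (pid 72617, two cores via -m 0x3) is started here for the create tests, and waitforlisten blocks until it answers on its RPC socket before any rpc_cmd runs. A hedged sketch of that launch-and-poll pattern; the inline loop stands in for the autotest_common.sh helper:

    # Start the target on cores 0-1 and wait until its UNIX-domain RPC socket
    # accepts requests; rpc_get_methods is a cheap liveness probe.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
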
00:16:57.666 [2024-11-15 11:20:34.940768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72617 ] 00:16:57.924 [2024-11-15 11:20:35.125142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:57.924 [2024-11-15 11:20:35.278659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.924 [2024-11-15 11:20:35.278679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.858 11:20:36 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:58.858 11:20:36 ublk -- common/autotest_common.sh@866 -- # return 0 00:16:58.858 11:20:36 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:58.858 11:20:36 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:58.858 11:20:36 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:58.858 11:20:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:58.858 ************************************ 00:16:58.858 START TEST test_create_ublk 00:16:58.858 ************************************ 00:16:58.858 11:20:36 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:16:58.858 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:58.858 11:20:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.858 11:20:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:58.858 [2024-11-15 11:20:36.167583] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:58.858 [2024-11-15 11:20:36.170390] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:58.858 11:20:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.858 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:58.858 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:58.858 11:20:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.858 11:20:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.117 11:20:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.117 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:59.117 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:59.117 11:20:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.117 11:20:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.117 [2024-11-15 11:20:36.480751] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:59.117 [2024-11-15 11:20:36.481213] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:59.117 [2024-11-15 11:20:36.481235] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:59.117 [2024-11-15 11:20:36.481244] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:59.117 [2024-11-15 11:20:36.488622] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:59.117 [2024-11-15 11:20:36.488647] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:59.117 
[2024-11-15 11:20:36.496592] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:59.117 [2024-11-15 11:20:36.497151] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:59.117 [2024-11-15 11:20:36.506694] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:59.117 11:20:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.117 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:59.117 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:59.117 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:59.117 11:20:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.117 11:20:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.375 11:20:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.375 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:59.375 { 00:16:59.375 "ublk_device": "/dev/ublkb0", 00:16:59.375 "id": 0, 00:16:59.375 "queue_depth": 512, 00:16:59.375 "num_queues": 4, 00:16:59.375 "bdev_name": "Malloc0" 00:16:59.375 } 00:16:59.375 ]' 00:16:59.375 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:59.375 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:59.375 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:59.375 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:59.375 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:59.375 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:59.375 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:59.375 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:59.375 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:59.375 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:59.375 11:20:36 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:59.375 11:20:36 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:59.375 11:20:36 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:59.375 11:20:36 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:59.375 11:20:36 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:59.375 11:20:36 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:59.375 11:20:36 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:59.375 11:20:36 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:59.375 11:20:36 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:59.375 11:20:36 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:59.375 11:20:36 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
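Editor's note: run_fio_test expands to the pattern-write job below: 10 seconds of direct 4 KiB writes of 0xcc across the 128 MiB device, verified inline (--do_verify=1 --verify=pattern). A hedged companion step, not part of this run, that would re-read the device afterwards and check the same pattern:

    # Illustrative follow-up only: read /dev/ublkb0 back and have fio check
    # every block against the 0xcc pattern laid down by the write job.
    fio --name=fio_verify --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=read --direct=1 --verify=pattern --verify_pattern=0xcc
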
00:16:59.375 11:20:36 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:59.634 fio: verification read phase will never start because write phase uses all of runtime 00:16:59.634 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:59.634 fio-3.35 00:16:59.634 Starting 1 process 00:17:09.674 00:17:09.674 fio_test: (groupid=0, jobs=1): err= 0: pid=72669: Fri Nov 15 11:20:46 2024 00:17:09.674 write: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(396MiB/10001msec); 0 zone resets 00:17:09.674 clat (usec): min=43, max=9792, avg=97.86, stdev=161.35 00:17:09.674 lat (usec): min=43, max=9822, avg=98.32, stdev=161.38 00:17:09.674 clat percentiles (usec): 00:17:09.674 | 1.00th=[ 58], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 64], 00:17:09.674 | 30.00th=[ 81], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 98], 00:17:09.674 | 70.00th=[ 100], 80.00th=[ 103], 90.00th=[ 110], 95.00th=[ 116], 00:17:09.674 | 99.00th=[ 141], 99.50th=[ 167], 99.90th=[ 3261], 99.95th=[ 3818], 00:17:09.674 | 99.99th=[ 4146] 00:17:09.674 bw ( KiB/s): min=19256, max=59272, per=100.00%, avg=40757.89, stdev=10086.80, samples=19 00:17:09.674 iops : min= 4814, max=14818, avg=10189.47, stdev=2521.70, samples=19 00:17:09.674 lat (usec) : 50=0.01%, 100=69.13%, 250=30.51%, 500=0.02%, 750=0.02% 00:17:09.674 lat (usec) : 1000=0.02% 00:17:09.674 lat (msec) : 2=0.07%, 4=0.19%, 10=0.03% 00:17:09.674 cpu : usr=1.77%, sys=7.82%, ctx=101338, majf=0, minf=797 00:17:09.674 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.674 issued rwts: total=0,101333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.674 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.674 00:17:09.674 Run status group 0 (all jobs): 00:17:09.674 WRITE: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=396MiB (415MB), run=10001-10001msec 00:17:09.674 00:17:09.674 Disk stats (read/write): 00:17:09.674 ublkb0: ios=0/100406, merge=0/0, ticks=0/8888, in_queue=8888, util=99.13% 00:17:09.674 11:20:46 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:09.674 11:20:46 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.674 11:20:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:09.674 [2024-11-15 11:20:46.989508] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:09.674 [2024-11-15 11:20:47.019455] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:09.674 [2024-11-15 11:20:47.020332] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:09.674 [2024-11-15 11:20:47.026679] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:09.674 [2024-11-15 11:20:47.027099] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:09.674 [2024-11-15 11:20:47.027138] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.674 11:20:47 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:09.674 [2024-11-15 11:20:47.042704] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:09.674 request: 00:17:09.674 { 00:17:09.674 "ublk_id": 0, 00:17:09.674 "method": "ublk_stop_disk", 00:17:09.674 "req_id": 1 00:17:09.674 } 00:17:09.674 Got JSON-RPC error response 00:17:09.674 response: 00:17:09.674 { 00:17:09.674 "code": -19, 00:17:09.674 "message": "No such device" 00:17:09.674 } 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:09.674 11:20:47 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.674 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:09.674 [2024-11-15 11:20:47.066721] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:09.931 [2024-11-15 11:20:47.074584] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:09.931 [2024-11-15 11:20:47.074644] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:09.931 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.932 11:20:47 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:09.932 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.932 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.497 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.497 11:20:47 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:10.497 11:20:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:10.497 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.497 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.497 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.755 11:20:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:10.755 11:20:47 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:10.755 11:20:47 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:10.755 11:20:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:10.755 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.755 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.755 11:20:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.755 11:20:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:10.755 11:20:47 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:10.755 ************************************ 00:17:10.755 END TEST test_create_ublk 00:17:10.755 ************************************ 00:17:10.755 11:20:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:10.755 00:17:10.755 real 0m11.853s 00:17:10.755 user 0m0.561s 00:17:10.755 sys 0m0.909s 00:17:10.755 11:20:48 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:10.755 11:20:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.755 11:20:48 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:10.755 11:20:48 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:10.755 11:20:48 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:10.755 11:20:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.755 ************************************ 00:17:10.755 START TEST test_create_multi_ublk 00:17:10.755 ************************************ 00:17:10.755 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:17:10.755 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:10.755 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.755 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.755 [2024-11-15 11:20:48.087582] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:10.755 [2024-11-15 11:20:48.090747] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:10.755 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.755 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:10.755 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:10.755 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:10.755 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:10.755 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.755 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.014 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.014 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:11.272 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:11.272 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.272 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.272 [2024-11-15 11:20:48.421788] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:17:11.272 [2024-11-15 11:20:48.422369] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:11.272 [2024-11-15 11:20:48.422393] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:11.272 [2024-11-15 11:20:48.422411] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:11.272 [2024-11-15 11:20:48.429621] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:11.272 [2024-11-15 11:20:48.429658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:11.272 [2024-11-15 11:20:48.437593] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:11.272 [2024-11-15 11:20:48.438263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:11.272 [2024-11-15 11:20:48.468610] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:11.272 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.272 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:11.272 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:11.272 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:11.272 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.272 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.531 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.531 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:11.531 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:11.531 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.531 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.531 [2024-11-15 11:20:48.803771] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:11.531 [2024-11-15 11:20:48.804300] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:11.531 [2024-11-15 11:20:48.804327] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:11.531 [2024-11-15 11:20:48.804338] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:11.531 [2024-11-15 11:20:48.811640] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:11.531 [2024-11-15 11:20:48.811670] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:11.531 [2024-11-15 11:20:48.819617] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:11.531 [2024-11-15 11:20:48.820260] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:11.531 [2024-11-15 11:20:48.843606] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:11.531 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.531 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:11.531 11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:11.531 
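Editor's note: devices 0 and 1 are now live, and the same sequence repeats below for Malloc2 and Malloc3. Condensed, the per-iteration RPC pattern the loop drives is (a sketch; the script itself goes through the rpc_cmd wrapper):

    # One 128 MiB malloc bdev (4 KiB blocks) per ublk device, each exported
    # as /dev/ublkb$i with 4 queues of depth 512.
    for i in 0 1 2 3; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
    done
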
11:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:11.531 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.531 11:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.790 11:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.790 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:11.790 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:11.790 11:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.790 11:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.790 [2024-11-15 11:20:49.171766] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:11.790 [2024-11-15 11:20:49.172322] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:11.790 [2024-11-15 11:20:49.172344] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:11.790 [2024-11-15 11:20:49.172358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:11.790 [2024-11-15 11:20:49.179626] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:11.790 [2024-11-15 11:20:49.179663] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:11.790 [2024-11-15 11:20:49.187612] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:11.790 [2024-11-15 11:20:49.188277] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:12.048 [2024-11-15 11:20:49.196650] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:12.048 11:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.048 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:12.048 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.048 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:12.048 11:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.048 11:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.308 [2024-11-15 11:20:49.523785] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:12.308 [2024-11-15 11:20:49.524315] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:12.308 [2024-11-15 11:20:49.524341] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:12.308 [2024-11-15 11:20:49.524353] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:12.308 
[2024-11-15 11:20:49.531643] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:12.308 [2024-11-15 11:20:49.531675] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:12.308 [2024-11-15 11:20:49.539600] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:12.308 [2024-11-15 11:20:49.540270] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:12.308 [2024-11-15 11:20:49.548661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:12.308 { 00:17:12.308 "ublk_device": "/dev/ublkb0", 00:17:12.308 "id": 0, 00:17:12.308 "queue_depth": 512, 00:17:12.308 "num_queues": 4, 00:17:12.308 "bdev_name": "Malloc0" 00:17:12.308 }, 00:17:12.308 { 00:17:12.308 "ublk_device": "/dev/ublkb1", 00:17:12.308 "id": 1, 00:17:12.308 "queue_depth": 512, 00:17:12.308 "num_queues": 4, 00:17:12.308 "bdev_name": "Malloc1" 00:17:12.308 }, 00:17:12.308 { 00:17:12.308 "ublk_device": "/dev/ublkb2", 00:17:12.308 "id": 2, 00:17:12.308 "queue_depth": 512, 00:17:12.308 "num_queues": 4, 00:17:12.308 "bdev_name": "Malloc2" 00:17:12.308 }, 00:17:12.308 { 00:17:12.308 "ublk_device": "/dev/ublkb3", 00:17:12.308 "id": 3, 00:17:12.308 "queue_depth": 512, 00:17:12.308 "num_queues": 4, 00:17:12.308 "bdev_name": "Malloc3" 00:17:12.308 } 00:17:12.308 ]' 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:12.308 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:12.566 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:12.566 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:12.566 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:12.566 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:12.566 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:12.566 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.566 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:12.566 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:12.566 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:12.566 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:12.566 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:12.566 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:12.566 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:12.825 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:12.825 11:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.825 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.084 11:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.084 [2024-11-15 11:20:50.426785] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:13.084 [2024-11-15 11:20:50.466233] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:13.084 [2024-11-15 11:20:50.467804] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:13.084 [2024-11-15 11:20:50.475665] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:13.084 [2024-11-15 11:20:50.476011] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:13.084 [2024-11-15 11:20:50.476037] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.343 [2024-11-15 11:20:50.490686] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:13.343 [2024-11-15 11:20:50.526232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:13.343 [2024-11-15 11:20:50.527619] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:13.343 [2024-11-15 11:20:50.533618] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:13.343 [2024-11-15 11:20:50.533954] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:13.343 [2024-11-15 11:20:50.533978] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.343 [2024-11-15 11:20:50.546733] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:13.343 [2024-11-15 11:20:50.585225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:13.343 [2024-11-15 11:20:50.586677] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:13.343 [2024-11-15 11:20:50.594638] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:13.343 [2024-11-15 11:20:50.594962] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:13.343 [2024-11-15 11:20:50.594981] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:17:13.343 [2024-11-15 11:20:50.609718] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:13.343 [2024-11-15 11:20:50.641191] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:13.343 [2024-11-15 11:20:50.642343] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:13.343 [2024-11-15 11:20:50.649608] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:13.343 [2024-11-15 11:20:50.649936] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:13.343 [2024-11-15 11:20:50.649954] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.343 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:13.602 [2024-11-15 11:20:50.849676] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:13.602 [2024-11-15 11:20:50.857587] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:13.602 [2024-11-15 11:20:50.857629] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:13.602 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:13.602 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.602 11:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:13.602 11:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.602 11:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.539 11:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.539 11:20:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:14.539 11:20:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:14.539 11:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.539 11:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.798 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.798 11:20:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:14.798 11:20:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:14.798 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.798 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.366 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.366 11:20:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:15.366 11:20:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:15.366 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.366 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:15.625 11:20:52 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:15.625 ************************************ 00:17:15.625 END TEST test_create_multi_ublk 00:17:15.625 ************************************ 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:15.625 00:17:15.625 real 0m4.916s 00:17:15.625 user 0m1.008s 00:17:15.625 sys 0m0.223s 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:15.625 11:20:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.884 11:20:53 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:15.884 11:20:53 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:15.884 11:20:53 ublk -- ublk/ublk.sh@130 -- # killprocess 72617 00:17:15.884 11:20:53 ublk -- common/autotest_common.sh@952 -- # '[' -z 72617 ']' 00:17:15.884 11:20:53 ublk -- common/autotest_common.sh@956 -- # kill -0 72617 00:17:15.884 11:20:53 ublk -- common/autotest_common.sh@957 -- # uname 00:17:15.884 11:20:53 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:15.884 11:20:53 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72617 00:17:15.884 killing process with pid 72617 00:17:15.884 11:20:53 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:15.884 11:20:53 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:15.884 11:20:53 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72617' 00:17:15.884 11:20:53 ublk -- common/autotest_common.sh@971 -- # kill 72617 00:17:15.884 11:20:53 ublk -- common/autotest_common.sh@976 -- # wait 72617 00:17:17.299 [2024-11-15 11:20:54.390847] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:17.299 [2024-11-15 11:20:54.390951] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:18.706 ************************************ 00:17:18.706 END TEST ublk 00:17:18.706 ************************************ 00:17:18.706 00:17:18.706 real 0m31.438s 00:17:18.706 user 0m44.995s 00:17:18.706 sys 0m10.005s 00:17:18.706 11:20:55 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:18.706 11:20:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:18.706 11:20:55 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:18.706 
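Editor's note: ublk_recovery.sh exercises the kernel side of ublk directly, so the ublk_drv module must be loaded before any /dev/ublkbN can appear (the modprobe shows up in the trace below). A quick sanity check, assuming a kernel that ships the driver:

    # Load the kernel driver and confirm the control node exists; without
    # /dev/ublk-control no ublk block device can be created.
    modprobe ublk_drv
    test -c /dev/ublk-control && echo "ublk_drv ready"
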
11:20:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:18.706 11:20:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:18.706 11:20:55 -- common/autotest_common.sh@10 -- # set +x 00:17:18.706 ************************************ 00:17:18.706 START TEST ublk_recovery 00:17:18.706 ************************************ 00:17:18.706 11:20:55 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:18.706 * Looking for test storage... 00:17:18.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:18.706 11:20:55 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:18.706 11:20:55 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:17:18.706 11:20:55 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:18.706 11:20:56 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.706 11:20:56 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:18.706 11:20:56 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.706 11:20:56 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:18.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.706 --rc genhtml_branch_coverage=1 00:17:18.706 --rc genhtml_function_coverage=1 00:17:18.706 --rc genhtml_legend=1 00:17:18.706 --rc geninfo_all_blocks=1 00:17:18.706 --rc geninfo_unexecuted_blocks=1 00:17:18.706 00:17:18.706 ' 00:17:18.706 11:20:56 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:18.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.706 --rc genhtml_branch_coverage=1 00:17:18.706 --rc genhtml_function_coverage=1 00:17:18.706 --rc genhtml_legend=1 00:17:18.706 --rc geninfo_all_blocks=1 00:17:18.706 --rc geninfo_unexecuted_blocks=1 00:17:18.706 00:17:18.706 ' 00:17:18.706 11:20:56 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:18.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.706 --rc genhtml_branch_coverage=1 00:17:18.706 --rc genhtml_function_coverage=1 00:17:18.706 --rc genhtml_legend=1 00:17:18.706 --rc geninfo_all_blocks=1 00:17:18.706 --rc geninfo_unexecuted_blocks=1 00:17:18.706 00:17:18.706 ' 00:17:18.706 11:20:56 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:18.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.706 --rc genhtml_branch_coverage=1 00:17:18.706 --rc genhtml_function_coverage=1 00:17:18.706 --rc genhtml_legend=1 00:17:18.706 --rc geninfo_all_blocks=1 00:17:18.706 --rc geninfo_unexecuted_blocks=1 00:17:18.706 00:17:18.706 ' 00:17:18.706 11:20:56 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:18.706 11:20:56 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:18.706 11:20:56 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:18.706 11:20:56 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:18.706 11:20:56 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:18.706 11:20:56 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:18.706 11:20:56 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:18.706 11:20:56 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:18.706 11:20:56 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:18.706 11:20:56 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:18.965 11:20:56 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73059 00:17:18.965 11:20:56 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.965 11:20:56 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73059 00:17:18.965 11:20:56 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 73059 ']' 00:17:18.965 11:20:56 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.965 11:20:56 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:18.965 11:20:56 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:18.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.965 11:20:56 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.965 11:20:56 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:18.965 11:20:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.965 [2024-11-15 11:20:56.212188] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:17:18.965 [2024-11-15 11:20:56.212307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73059 ] 00:17:19.224 [2024-11-15 11:20:56.389328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:19.224 [2024-11-15 11:20:56.526660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.224 [2024-11-15 11:20:56.526685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.157 11:20:57 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:20.157 11:20:57 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:17:20.157 11:20:57 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:20.157 11:20:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.157 11:20:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.157 [2024-11-15 11:20:57.509587] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:20.157 [2024-11-15 11:20:57.512660] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:20.157 11:20:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.157 11:20:57 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:20.157 11:20:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.157 11:20:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.416 malloc0 00:17:20.416 11:20:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.416 11:20:57 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:20.416 11:20:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.416 11:20:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.416 [2024-11-15 11:20:57.686301] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:20.416 [2024-11-15 11:20:57.686465] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:20.416 [2024-11-15 11:20:57.686484] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:20.416 [2024-11-15 11:20:57.686499] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:20.416 [2024-11-15 11:20:57.694775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:20.416 [2024-11-15 11:20:57.694806] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:20.416 [2024-11-15 11:20:57.701605] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:20.416 [2024-11-15 11:20:57.701790] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:20.416 [2024-11-15 11:20:57.712624] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:20.416 1 00:17:20.416 11:20:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.416 11:20:57 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:21.352 11:20:58 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73099 00:17:21.352 11:20:58 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:21.352 11:20:58 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:21.610 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:21.610 fio-3.35 00:17:21.610 Starting 1 process 00:17:26.877 11:21:03 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73059 00:17:26.877 11:21:03 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:32.188 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73059 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:32.188 11:21:08 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73211 00:17:32.188 11:21:08 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:32.188 11:21:08 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73211 00:17:32.188 11:21:08 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:32.188 11:21:08 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 73211 ']' 00:17:32.188 11:21:08 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.188 11:21:08 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:32.188 11:21:08 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.188 11:21:08 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:32.188 11:21:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.188 [2024-11-15 11:21:08.845420] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:17:32.188 [2024-11-15 11:21:08.845549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73211 ] 00:17:32.188 [2024-11-15 11:21:09.028963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:32.188 [2024-11-15 11:21:09.174438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.188 [2024-11-15 11:21:09.174462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.124 11:21:10 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:33.124 11:21:10 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:17:33.124 11:21:10 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:33.124 11:21:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.124 11:21:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.124 [2024-11-15 11:21:10.241579] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:33.124 [2024-11-15 11:21:10.244645] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:33.124 11:21:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.124 11:21:10 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:33.124 11:21:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.124 11:21:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.124 malloc0 00:17:33.124 11:21:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.124 11:21:10 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:33.124 11:21:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.124 11:21:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.124 [2024-11-15 11:21:10.391749] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:33.124 [2024-11-15 11:21:10.391793] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:33.124 [2024-11-15 11:21:10.391805] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:33.124 [2024-11-15 11:21:10.399623] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:33.124 [2024-11-15 11:21:10.399648] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:17:33.124 [2024-11-15 11:21:10.399658] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:33.124 [2024-11-15 11:21:10.399754] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:33.124 1 00:17:33.124 11:21:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.124 11:21:10 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73099 00:17:33.124 [2024-11-15 11:21:10.407592] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:33.124 [2024-11-15 11:21:10.414225] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:33.124 [2024-11-15 11:21:10.421771] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:33.124 [2024-11-15 
11:21:10.421799] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:29.421 00:18:29.421 fio_test: (groupid=0, jobs=1): err= 0: pid=73102: Fri Nov 15 11:21:58 2024 00:18:29.421 read: IOPS=21.0k, BW=82.0MiB/s (86.0MB/s)(4921MiB/60002msec) 00:18:29.421 slat (usec): min=2, max=341, avg= 7.79, stdev= 2.40 00:18:29.421 clat (usec): min=1368, max=6701.1k, avg=3019.70, stdev=49201.89 00:18:29.421 lat (usec): min=1375, max=6701.2k, avg=3027.49, stdev=49201.89 00:18:29.421 clat percentiles (usec): 00:18:29.421 | 1.00th=[ 1975], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2311], 00:18:29.421 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:18:29.421 | 70.00th=[ 2835], 80.00th=[ 3064], 90.00th=[ 3228], 95.00th=[ 3818], 00:18:29.421 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 6915], 99.95th=[ 7373], 00:18:29.421 | 99.99th=[12911] 00:18:29.421 bw ( KiB/s): min= 3936, max=105256, per=100.00%, avg=93539.97, stdev=15337.85, samples=107 00:18:29.421 iops : min= 984, max=26314, avg=23384.94, stdev=3834.47, samples=107 00:18:29.421 write: IOPS=21.0k, BW=81.9MiB/s (85.9MB/s)(4917MiB/60002msec); 0 zone resets 00:18:29.421 slat (usec): min=2, max=349, avg= 7.81, stdev= 2.45 00:18:29.421 clat (usec): min=1399, max=6701.4k, avg=3062.23, stdev=46240.77 00:18:29.421 lat (usec): min=1409, max=6701.4k, avg=3070.04, stdev=46240.78 00:18:29.421 clat percentiles (usec): 00:18:29.421 | 1.00th=[ 1975], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2409], 00:18:29.421 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2573], 00:18:29.421 | 70.00th=[ 2802], 80.00th=[ 3195], 90.00th=[ 3359], 95.00th=[ 3818], 00:18:29.421 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 7111], 99.95th=[ 7504], 00:18:29.421 | 99.99th=[13173] 00:18:29.421 bw ( KiB/s): min= 3704, max=104744, per=100.00%, avg=93459.14, stdev=15327.42, samples=107 00:18:29.421 iops : min= 926, max=26186, avg=23364.72, stdev=3831.83, samples=107 00:18:29.421 lat (msec) : 2=1.27%, 4=94.68%, 10=4.03%, 20=0.01%, >=2000=0.01% 00:18:29.421 cpu : usr=11.87%, sys=32.80%, ctx=110651, majf=0, minf=14 00:18:29.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:29.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:29.421 issued rwts: total=1259859,1258696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:29.421 00:18:29.421 Run status group 0 (all jobs): 00:18:29.421 READ: bw=82.0MiB/s (86.0MB/s), 82.0MiB/s-82.0MiB/s (86.0MB/s-86.0MB/s), io=4921MiB (5160MB), run=60002-60002msec 00:18:29.421 WRITE: bw=81.9MiB/s (85.9MB/s), 81.9MiB/s-81.9MiB/s (85.9MB/s-85.9MB/s), io=4917MiB (5156MB), run=60002-60002msec 00:18:29.421 00:18:29.421 Disk stats (read/write): 00:18:29.421 ublkb1: ios=1257642/1256523, merge=0/0, ticks=3687182/3604138, in_queue=7291321, util=99.95% 00:18:29.421 11:21:58 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:29.421 11:21:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.421 11:21:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.421 [2024-11-15 11:21:59.004249] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:29.421 [2024-11-15 11:21:59.041646] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:29.421 [2024-11-15 11:21:59.042110] 
ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:29.422 [2024-11-15 11:21:59.050673] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:29.422 [2024-11-15 11:21:59.054782] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:29.422 [2024-11-15 11:21:59.054809] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.422 11:21:59 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.422 [2024-11-15 11:21:59.058863] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:29.422 [2024-11-15 11:21:59.066822] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:29.422 [2024-11-15 11:21:59.066866] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.422 11:21:59 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:29.422 11:21:59 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:29.422 11:21:59 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73211 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 73211 ']' 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 73211 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73211 00:18:29.422 killing process with pid 73211 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73211' 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@971 -- # kill 73211 00:18:29.422 11:21:59 ublk_recovery -- common/autotest_common.sh@976 -- # wait 73211 00:18:29.422 [2024-11-15 11:22:00.739571] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:29.422 [2024-11-15 11:22:00.739664] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:29.422 00:18:29.422 real 1m6.318s 00:18:29.422 user 1m49.990s 00:18:29.422 sys 0m38.436s 00:18:29.422 11:22:02 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:29.422 11:22:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.422 ************************************ 00:18:29.422 END TEST ublk_recovery 00:18:29.422 ************************************ 00:18:29.422 11:22:02 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:18:29.422 11:22:02 -- spdk/autotest.sh@256 -- # timing_exit lib 00:18:29.422 11:22:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:29.422 11:22:02 -- common/autotest_common.sh@10 -- # set +x 00:18:29.422 11:22:02 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:18:29.422 11:22:02 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:18:29.422 11:22:02 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:18:29.422 11:22:02 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:18:29.422 11:22:02 -- spdk/autotest.sh@311 
-- # '[' 0 -eq 1 ']' 00:18:29.422 11:22:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:29.422 11:22:02 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:18:29.422 11:22:02 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:18:29.422 11:22:02 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:18:29.422 11:22:02 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:18:29.422 11:22:02 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:29.422 11:22:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:29.422 11:22:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:29.422 11:22:02 -- common/autotest_common.sh@10 -- # set +x 00:18:29.422 ************************************ 00:18:29.422 START TEST ftl 00:18:29.422 ************************************ 00:18:29.422 11:22:02 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:29.422 * Looking for test storage... 00:18:29.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:29.422 11:22:02 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:29.422 11:22:02 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:18:29.422 11:22:02 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:29.422 11:22:02 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:29.422 11:22:02 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:29.422 11:22:02 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:29.422 11:22:02 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:29.422 11:22:02 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:29.422 11:22:02 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:29.422 11:22:02 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:29.422 11:22:02 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:29.422 11:22:02 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:29.422 11:22:02 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:29.422 11:22:02 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:29.422 11:22:02 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:29.422 11:22:02 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:29.422 11:22:02 ftl -- scripts/common.sh@345 -- # : 1 00:18:29.422 11:22:02 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:29.422 11:22:02 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:29.422 11:22:02 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:29.422 11:22:02 ftl -- scripts/common.sh@353 -- # local d=1 00:18:29.422 11:22:02 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:29.422 11:22:02 ftl -- scripts/common.sh@355 -- # echo 1 00:18:29.422 11:22:02 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:29.422 11:22:02 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:29.422 11:22:02 ftl -- scripts/common.sh@353 -- # local d=2 00:18:29.422 11:22:02 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:29.422 11:22:02 ftl -- scripts/common.sh@355 -- # echo 2 00:18:29.422 11:22:02 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:29.422 11:22:02 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:29.422 11:22:02 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:29.422 11:22:02 ftl -- scripts/common.sh@368 -- # return 0 00:18:29.422 11:22:02 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:29.422 11:22:02 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:29.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.422 --rc genhtml_branch_coverage=1 00:18:29.422 --rc genhtml_function_coverage=1 00:18:29.422 --rc genhtml_legend=1 00:18:29.422 --rc geninfo_all_blocks=1 00:18:29.422 --rc geninfo_unexecuted_blocks=1 00:18:29.422 00:18:29.422 ' 00:18:29.422 11:22:02 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:29.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.422 --rc genhtml_branch_coverage=1 00:18:29.422 --rc genhtml_function_coverage=1 00:18:29.422 --rc genhtml_legend=1 00:18:29.422 --rc geninfo_all_blocks=1 00:18:29.422 --rc geninfo_unexecuted_blocks=1 00:18:29.422 00:18:29.422 ' 00:18:29.422 11:22:02 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:29.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.422 --rc genhtml_branch_coverage=1 00:18:29.422 --rc genhtml_function_coverage=1 00:18:29.422 --rc genhtml_legend=1 00:18:29.422 --rc geninfo_all_blocks=1 00:18:29.422 --rc geninfo_unexecuted_blocks=1 00:18:29.422 00:18:29.422 ' 00:18:29.422 11:22:02 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:29.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.422 --rc genhtml_branch_coverage=1 00:18:29.422 --rc genhtml_function_coverage=1 00:18:29.422 --rc genhtml_legend=1 00:18:29.422 --rc geninfo_all_blocks=1 00:18:29.422 --rc geninfo_unexecuted_blocks=1 00:18:29.422 00:18:29.422 ' 00:18:29.422 11:22:02 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:29.422 11:22:02 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:29.422 11:22:02 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:29.422 11:22:02 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:29.422 11:22:02 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:29.422 11:22:02 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:29.422 11:22:02 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:29.422 11:22:02 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:29.422 11:22:02 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:29.422 11:22:02 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:29.422 11:22:02 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:29.422 11:22:02 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:29.422 11:22:02 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:29.422 11:22:02 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:29.422 11:22:02 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:29.422 11:22:02 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:29.422 11:22:02 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:29.422 11:22:02 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:29.422 11:22:02 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:29.422 11:22:02 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:29.422 11:22:02 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:29.423 11:22:02 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:29.423 11:22:02 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:29.423 11:22:02 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:29.423 11:22:02 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:29.423 11:22:02 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:29.423 11:22:02 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:29.423 11:22:02 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:29.423 11:22:02 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:29.423 11:22:02 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:29.423 11:22:02 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:29.423 11:22:02 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:29.423 11:22:02 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:29.423 11:22:02 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:29.423 11:22:02 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:29.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:29.423 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:29.423 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:29.423 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:29.423 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:29.423 11:22:03 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:29.423 11:22:03 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74022 00:18:29.423 11:22:03 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74022 00:18:29.423 11:22:03 ftl -- common/autotest_common.sh@833 -- # '[' -z 74022 ']' 00:18:29.423 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.423 11:22:03 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.423 11:22:03 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:29.423 11:22:03 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.423 11:22:03 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:29.423 11:22:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:29.423 [2024-11-15 11:22:03.509497] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:18:29.423 [2024-11-15 11:22:03.509643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74022 ] 00:18:29.423 [2024-11-15 11:22:03.691748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.423 [2024-11-15 11:22:03.812712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.423 11:22:04 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:29.423 11:22:04 ftl -- common/autotest_common.sh@866 -- # return 0 00:18:29.423 11:22:04 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:29.423 11:22:04 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:29.423 11:22:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:29.423 11:22:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@50 -- # break 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@63 -- # break 00:18:29.423 11:22:06 ftl -- ftl/ftl.sh@66 -- # killprocess 74022 00:18:29.423 11:22:06 ftl -- common/autotest_common.sh@952 -- # '[' -z 74022 ']' 00:18:29.423 11:22:06 ftl -- common/autotest_common.sh@956 -- # kill -0 74022 00:18:29.423 11:22:06 ftl -- common/autotest_common.sh@957 -- # uname 00:18:29.423 11:22:06 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:29.423 11:22:06 ftl -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74022 00:18:29.423 killing process with pid 74022 00:18:29.423 11:22:06 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:29.423 11:22:06 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:29.423 11:22:06 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74022' 00:18:29.423 11:22:06 ftl -- common/autotest_common.sh@971 -- # kill 74022 00:18:29.423 11:22:06 ftl -- common/autotest_common.sh@976 -- # wait 74022 00:18:31.954 11:22:08 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:31.954 11:22:08 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:31.954 11:22:08 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:31.954 11:22:08 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:31.954 11:22:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:31.954 ************************************ 00:18:31.954 START TEST ftl_fio_basic 00:18:31.954 ************************************ 00:18:31.954 11:22:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:31.954 * Looking for test storage... 00:18:31.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:31.954 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:31.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.955 --rc genhtml_branch_coverage=1 00:18:31.955 --rc genhtml_function_coverage=1 00:18:31.955 --rc genhtml_legend=1 00:18:31.955 --rc geninfo_all_blocks=1 00:18:31.955 --rc geninfo_unexecuted_blocks=1 00:18:31.955 00:18:31.955 ' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:31.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.955 --rc genhtml_branch_coverage=1 00:18:31.955 --rc genhtml_function_coverage=1 00:18:31.955 --rc genhtml_legend=1 00:18:31.955 --rc geninfo_all_blocks=1 00:18:31.955 --rc geninfo_unexecuted_blocks=1 00:18:31.955 00:18:31.955 ' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:31.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.955 --rc genhtml_branch_coverage=1 00:18:31.955 --rc genhtml_function_coverage=1 00:18:31.955 --rc genhtml_legend=1 00:18:31.955 --rc geninfo_all_blocks=1 00:18:31.955 --rc geninfo_unexecuted_blocks=1 00:18:31.955 00:18:31.955 ' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:31.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.955 --rc genhtml_branch_coverage=1 00:18:31.955 --rc genhtml_function_coverage=1 00:18:31.955 --rc genhtml_legend=1 00:18:31.955 --rc geninfo_all_blocks=1 00:18:31.955 --rc geninfo_unexecuted_blocks=1 00:18:31.955 00:18:31.955 ' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74165 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74165 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 74165 ']' 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:31.955 11:22:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:31.955 [2024-11-15 11:22:09.277716] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:18:31.955 [2024-11-15 11:22:09.278023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74165 ] 00:18:32.219 [2024-11-15 11:22:09.472210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:32.219 [2024-11-15 11:22:09.588105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.219 [2024-11-15 11:22:09.588242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.219 [2024-11-15 11:22:09.588276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.148 11:22:10 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:33.148 11:22:10 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:18:33.148 11:22:10 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:33.148 11:22:10 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:33.148 11:22:10 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:33.148 11:22:10 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:33.148 11:22:10 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:33.148 11:22:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:33.405 11:22:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:33.405 11:22:10 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:33.405 11:22:10 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:33.405 11:22:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:18:33.405 11:22:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:33.405 11:22:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:33.405 11:22:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:33.405 11:22:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:33.663 11:22:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:33.663 { 00:18:33.663 "name": "nvme0n1", 00:18:33.663 "aliases": [ 00:18:33.663 "b61b3e74-bd81-4270-a5c4-95fc2a2fa034" 00:18:33.663 ], 00:18:33.663 "product_name": "NVMe disk", 00:18:33.663 "block_size": 4096, 00:18:33.663 "num_blocks": 1310720, 00:18:33.663 "uuid": "b61b3e74-bd81-4270-a5c4-95fc2a2fa034", 00:18:33.663 "numa_id": -1, 00:18:33.663 "assigned_rate_limits": { 00:18:33.663 "rw_ios_per_sec": 0, 00:18:33.663 "rw_mbytes_per_sec": 0, 00:18:33.663 "r_mbytes_per_sec": 0, 00:18:33.663 "w_mbytes_per_sec": 0 00:18:33.663 }, 00:18:33.663 "claimed": false, 00:18:33.663 "zoned": false, 00:18:33.663 "supported_io_types": { 00:18:33.663 "read": true, 00:18:33.663 "write": true, 00:18:33.663 "unmap": true, 00:18:33.663 "flush": true, 00:18:33.663 "reset": true, 00:18:33.663 "nvme_admin": true, 00:18:33.663 "nvme_io": true, 00:18:33.663 "nvme_io_md": false, 00:18:33.663 "write_zeroes": true, 00:18:33.663 "zcopy": false, 00:18:33.663 "get_zone_info": false, 00:18:33.663 "zone_management": false, 00:18:33.663 "zone_append": false, 00:18:33.663 "compare": true, 00:18:33.663 "compare_and_write": false, 00:18:33.663 "abort": true, 00:18:33.663 
"seek_hole": false, 00:18:33.663 "seek_data": false, 00:18:33.663 "copy": true, 00:18:33.663 "nvme_iov_md": false 00:18:33.663 }, 00:18:33.663 "driver_specific": { 00:18:33.663 "nvme": [ 00:18:33.663 { 00:18:33.663 "pci_address": "0000:00:11.0", 00:18:33.663 "trid": { 00:18:33.663 "trtype": "PCIe", 00:18:33.663 "traddr": "0000:00:11.0" 00:18:33.663 }, 00:18:33.663 "ctrlr_data": { 00:18:33.663 "cntlid": 0, 00:18:33.663 "vendor_id": "0x1b36", 00:18:33.663 "model_number": "QEMU NVMe Ctrl", 00:18:33.663 "serial_number": "12341", 00:18:33.663 "firmware_revision": "8.0.0", 00:18:33.663 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:33.663 "oacs": { 00:18:33.663 "security": 0, 00:18:33.663 "format": 1, 00:18:33.663 "firmware": 0, 00:18:33.663 "ns_manage": 1 00:18:33.663 }, 00:18:33.663 "multi_ctrlr": false, 00:18:33.663 "ana_reporting": false 00:18:33.663 }, 00:18:33.663 "vs": { 00:18:33.663 "nvme_version": "1.4" 00:18:33.663 }, 00:18:33.663 "ns_data": { 00:18:33.663 "id": 1, 00:18:33.663 "can_share": false 00:18:33.663 } 00:18:33.663 } 00:18:33.663 ], 00:18:33.663 "mp_policy": "active_passive" 00:18:33.663 } 00:18:33.663 } 00:18:33.663 ]' 00:18:33.663 11:22:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:33.663 11:22:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:33.663 11:22:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:33.663 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:18:33.663 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:18:33.663 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:18:33.663 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:33.663 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:33.663 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:33.663 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:33.663 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:33.921 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:33.921 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:34.178 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=dfa84a0c-f871-490f-a753-9532dd659c6a 00:18:34.178 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u dfa84a0c-f871-490f-a753-9532dd659c6a 00:18:34.435 11:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a8b2631c-4ac4-4b7e-8dc8-860a288d6631 00:18:34.435 11:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a8b2631c-4ac4-4b7e-8dc8-860a288d6631 00:18:34.435 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:34.435 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:34.435 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a8b2631c-4ac4-4b7e-8dc8-860a288d6631 00:18:34.435 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:34.435 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a8b2631c-4ac4-4b7e-8dc8-860a288d6631 00:18:34.435 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=a8b2631c-4ac4-4b7e-8dc8-860a288d6631 
00:18:34.435 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:34.435 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:34.435 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:34.435 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a8b2631c-4ac4-4b7e-8dc8-860a288d6631 00:18:34.693 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:34.693 { 00:18:34.693 "name": "a8b2631c-4ac4-4b7e-8dc8-860a288d6631", 00:18:34.693 "aliases": [ 00:18:34.693 "lvs/nvme0n1p0" 00:18:34.693 ], 00:18:34.693 "product_name": "Logical Volume", 00:18:34.693 "block_size": 4096, 00:18:34.693 "num_blocks": 26476544, 00:18:34.693 "uuid": "a8b2631c-4ac4-4b7e-8dc8-860a288d6631", 00:18:34.693 "assigned_rate_limits": { 00:18:34.693 "rw_ios_per_sec": 0, 00:18:34.693 "rw_mbytes_per_sec": 0, 00:18:34.693 "r_mbytes_per_sec": 0, 00:18:34.693 "w_mbytes_per_sec": 0 00:18:34.693 }, 00:18:34.693 "claimed": false, 00:18:34.693 "zoned": false, 00:18:34.693 "supported_io_types": { 00:18:34.693 "read": true, 00:18:34.693 "write": true, 00:18:34.693 "unmap": true, 00:18:34.693 "flush": false, 00:18:34.693 "reset": true, 00:18:34.693 "nvme_admin": false, 00:18:34.693 "nvme_io": false, 00:18:34.693 "nvme_io_md": false, 00:18:34.693 "write_zeroes": true, 00:18:34.693 "zcopy": false, 00:18:34.693 "get_zone_info": false, 00:18:34.693 "zone_management": false, 00:18:34.693 "zone_append": false, 00:18:34.693 "compare": false, 00:18:34.693 "compare_and_write": false, 00:18:34.693 "abort": false, 00:18:34.693 "seek_hole": true, 00:18:34.693 "seek_data": true, 00:18:34.693 "copy": false, 00:18:34.693 "nvme_iov_md": false 00:18:34.693 }, 00:18:34.693 "driver_specific": { 00:18:34.693 "lvol": { 00:18:34.693 "lvol_store_uuid": "dfa84a0c-f871-490f-a753-9532dd659c6a", 00:18:34.693 "base_bdev": "nvme0n1", 00:18:34.693 "thin_provision": true, 00:18:34.693 "num_allocated_clusters": 0, 00:18:34.693 "snapshot": false, 00:18:34.693 "clone": false, 00:18:34.693 "esnap_clone": false 00:18:34.693 } 00:18:34.693 } 00:18:34.693 } 00:18:34.693 ]' 00:18:34.693 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:34.693 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:34.693 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:34.693 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:34.693 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:34.693 11:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:18:34.693 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:34.693 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:34.693 11:22:11 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:34.952 11:22:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:34.952 11:22:12 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:34.952 11:22:12 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a8b2631c-4ac4-4b7e-8dc8-860a288d6631 00:18:34.952 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=a8b2631c-4ac4-4b7e-8dc8-860a288d6631 00:18:34.952 11:22:12 
ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:34.952 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:34.952 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:34.952 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a8b2631c-4ac4-4b7e-8dc8-860a288d6631 00:18:35.210 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:35.210 { 00:18:35.210 "name": "a8b2631c-4ac4-4b7e-8dc8-860a288d6631", 00:18:35.210 "aliases": [ 00:18:35.210 "lvs/nvme0n1p0" 00:18:35.210 ], 00:18:35.210 "product_name": "Logical Volume", 00:18:35.210 "block_size": 4096, 00:18:35.210 "num_blocks": 26476544, 00:18:35.210 "uuid": "a8b2631c-4ac4-4b7e-8dc8-860a288d6631", 00:18:35.210 "assigned_rate_limits": { 00:18:35.210 "rw_ios_per_sec": 0, 00:18:35.210 "rw_mbytes_per_sec": 0, 00:18:35.210 "r_mbytes_per_sec": 0, 00:18:35.210 "w_mbytes_per_sec": 0 00:18:35.210 }, 00:18:35.210 "claimed": false, 00:18:35.210 "zoned": false, 00:18:35.210 "supported_io_types": { 00:18:35.210 "read": true, 00:18:35.210 "write": true, 00:18:35.210 "unmap": true, 00:18:35.210 "flush": false, 00:18:35.210 "reset": true, 00:18:35.210 "nvme_admin": false, 00:18:35.210 "nvme_io": false, 00:18:35.210 "nvme_io_md": false, 00:18:35.210 "write_zeroes": true, 00:18:35.211 "zcopy": false, 00:18:35.211 "get_zone_info": false, 00:18:35.211 "zone_management": false, 00:18:35.211 "zone_append": false, 00:18:35.211 "compare": false, 00:18:35.211 "compare_and_write": false, 00:18:35.211 "abort": false, 00:18:35.211 "seek_hole": true, 00:18:35.211 "seek_data": true, 00:18:35.211 "copy": false, 00:18:35.211 "nvme_iov_md": false 00:18:35.211 }, 00:18:35.211 "driver_specific": { 00:18:35.211 "lvol": { 00:18:35.211 "lvol_store_uuid": "dfa84a0c-f871-490f-a753-9532dd659c6a", 00:18:35.211 "base_bdev": "nvme0n1", 00:18:35.211 "thin_provision": true, 00:18:35.211 "num_allocated_clusters": 0, 00:18:35.211 "snapshot": false, 00:18:35.211 "clone": false, 00:18:35.211 "esnap_clone": false 00:18:35.211 } 00:18:35.211 } 00:18:35.211 } 00:18:35.211 ]' 00:18:35.211 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:35.211 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:35.211 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:35.211 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:35.211 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:35.211 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:18:35.211 11:22:12 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:35.211 11:22:12 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:35.470 11:22:12 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:35.470 11:22:12 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:35.470 11:22:12 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:35.470 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:35.470 11:22:12 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a8b2631c-4ac4-4b7e-8dc8-860a288d6631 00:18:35.470 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local 
bdev_name=a8b2631c-4ac4-4b7e-8dc8-860a288d6631 00:18:35.470 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:35.470 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:35.470 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:35.470 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a8b2631c-4ac4-4b7e-8dc8-860a288d6631 00:18:35.728 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:35.728 { 00:18:35.728 "name": "a8b2631c-4ac4-4b7e-8dc8-860a288d6631", 00:18:35.728 "aliases": [ 00:18:35.728 "lvs/nvme0n1p0" 00:18:35.728 ], 00:18:35.728 "product_name": "Logical Volume", 00:18:35.728 "block_size": 4096, 00:18:35.728 "num_blocks": 26476544, 00:18:35.728 "uuid": "a8b2631c-4ac4-4b7e-8dc8-860a288d6631", 00:18:35.728 "assigned_rate_limits": { 00:18:35.728 "rw_ios_per_sec": 0, 00:18:35.728 "rw_mbytes_per_sec": 0, 00:18:35.728 "r_mbytes_per_sec": 0, 00:18:35.728 "w_mbytes_per_sec": 0 00:18:35.728 }, 00:18:35.728 "claimed": false, 00:18:35.728 "zoned": false, 00:18:35.728 "supported_io_types": { 00:18:35.728 "read": true, 00:18:35.728 "write": true, 00:18:35.728 "unmap": true, 00:18:35.728 "flush": false, 00:18:35.728 "reset": true, 00:18:35.728 "nvme_admin": false, 00:18:35.728 "nvme_io": false, 00:18:35.728 "nvme_io_md": false, 00:18:35.728 "write_zeroes": true, 00:18:35.728 "zcopy": false, 00:18:35.728 "get_zone_info": false, 00:18:35.728 "zone_management": false, 00:18:35.728 "zone_append": false, 00:18:35.728 "compare": false, 00:18:35.728 "compare_and_write": false, 00:18:35.728 "abort": false, 00:18:35.728 "seek_hole": true, 00:18:35.728 "seek_data": true, 00:18:35.728 "copy": false, 00:18:35.728 "nvme_iov_md": false 00:18:35.728 }, 00:18:35.728 "driver_specific": { 00:18:35.728 "lvol": { 00:18:35.728 "lvol_store_uuid": "dfa84a0c-f871-490f-a753-9532dd659c6a", 00:18:35.728 "base_bdev": "nvme0n1", 00:18:35.728 "thin_provision": true, 00:18:35.728 "num_allocated_clusters": 0, 00:18:35.728 "snapshot": false, 00:18:35.728 "clone": false, 00:18:35.728 "esnap_clone": false 00:18:35.728 } 00:18:35.728 } 00:18:35.728 } 00:18:35.728 ]' 00:18:35.728 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:35.728 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:35.728 11:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:35.728 11:22:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:35.728 11:22:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:35.728 11:22:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:18:35.728 11:22:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:35.728 11:22:13 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:35.728 11:22:13 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a8b2631c-4ac4-4b7e-8dc8-860a288d6631 -c nvc0n1p0 --l2p_dram_limit 60 00:18:35.988 [2024-11-15 11:22:13.226699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.988 [2024-11-15 11:22:13.226911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:35.988 [2024-11-15 11:22:13.226941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:35.988 
[2024-11-15 11:22:13.226952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.988 [2024-11-15 11:22:13.227055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.988 [2024-11-15 11:22:13.227074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.988 [2024-11-15 11:22:13.227088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:35.988 [2024-11-15 11:22:13.227100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.988 [2024-11-15 11:22:13.227134] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:35.988 [2024-11-15 11:22:13.228177] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:35.988 [2024-11-15 11:22:13.228219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.988 [2024-11-15 11:22:13.228231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.988 [2024-11-15 11:22:13.228245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:18:35.988 [2024-11-15 11:22:13.228255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.988 [2024-11-15 11:22:13.228395] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 133021ba-9021-42d4-961b-67bd3eac4ea9 00:18:35.988 [2024-11-15 11:22:13.229888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.988 [2024-11-15 11:22:13.230041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:35.988 [2024-11-15 11:22:13.230062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:35.988 [2024-11-15 11:22:13.230076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.988 [2024-11-15 11:22:13.237731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.988 [2024-11-15 11:22:13.237773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.988 [2024-11-15 11:22:13.237787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.556 ms 00:18:35.988 [2024-11-15 11:22:13.237802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.988 [2024-11-15 11:22:13.237931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.988 [2024-11-15 11:22:13.237953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.988 [2024-11-15 11:22:13.237966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:18:35.988 [2024-11-15 11:22:13.237984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.988 [2024-11-15 11:22:13.238053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.988 [2024-11-15 11:22:13.238068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:35.988 [2024-11-15 11:22:13.238079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:35.988 [2024-11-15 11:22:13.238092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.988 [2024-11-15 11:22:13.238132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.988 [2024-11-15 11:22:13.243352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.988 [2024-11-15 
11:22:13.243384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.988 [2024-11-15 11:22:13.243399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.238 ms 00:18:35.988 [2024-11-15 11:22:13.243413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.988 [2024-11-15 11:22:13.243479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.988 [2024-11-15 11:22:13.243496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:35.988 [2024-11-15 11:22:13.243510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:35.988 [2024-11-15 11:22:13.243520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.988 [2024-11-15 11:22:13.243603] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:35.988 [2024-11-15 11:22:13.243770] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:35.988 [2024-11-15 11:22:13.243802] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:35.988 [2024-11-15 11:22:13.243819] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:35.988 [2024-11-15 11:22:13.243836] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:35.988 [2024-11-15 11:22:13.243849] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:35.988 [2024-11-15 11:22:13.243863] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:35.988 [2024-11-15 11:22:13.243874] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:35.988 [2024-11-15 11:22:13.243886] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:35.988 [2024-11-15 11:22:13.243898] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:35.988 [2024-11-15 11:22:13.243912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.988 [2024-11-15 11:22:13.243928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:35.988 [2024-11-15 11:22:13.243943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:18:35.988 [2024-11-15 11:22:13.243954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.988 [2024-11-15 11:22:13.244055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.988 [2024-11-15 11:22:13.244070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:35.988 [2024-11-15 11:22:13.244083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:35.988 [2024-11-15 11:22:13.244094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.988 [2024-11-15 11:22:13.244208] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:35.988 [2024-11-15 11:22:13.244221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:35.988 [2024-11-15 11:22:13.244239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.988 [2024-11-15 11:22:13.244249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.988 [2024-11-15 11:22:13.244262] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:18:35.988 [2024-11-15 11:22:13.244271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:35.988 [2024-11-15 11:22:13.244283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:35.988 [2024-11-15 11:22:13.244294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:35.988 [2024-11-15 11:22:13.244306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:35.988 [2024-11-15 11:22:13.244315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.988 [2024-11-15 11:22:13.244327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:35.989 [2024-11-15 11:22:13.244338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:35.989 [2024-11-15 11:22:13.244349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.989 [2024-11-15 11:22:13.244359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:35.989 [2024-11-15 11:22:13.244370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:35.989 [2024-11-15 11:22:13.244381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.989 [2024-11-15 11:22:13.244396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:35.989 [2024-11-15 11:22:13.244406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:35.989 [2024-11-15 11:22:13.244419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.989 [2024-11-15 11:22:13.244428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:35.989 [2024-11-15 11:22:13.244440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:35.989 [2024-11-15 11:22:13.244449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.989 [2024-11-15 11:22:13.244461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:35.989 [2024-11-15 11:22:13.244471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:35.989 [2024-11-15 11:22:13.244486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.989 [2024-11-15 11:22:13.244496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:35.989 [2024-11-15 11:22:13.244508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:35.989 [2024-11-15 11:22:13.244517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.989 [2024-11-15 11:22:13.244529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:35.989 [2024-11-15 11:22:13.244539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:35.989 [2024-11-15 11:22:13.244551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.989 [2024-11-15 11:22:13.244573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:35.989 [2024-11-15 11:22:13.244589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:35.989 [2024-11-15 11:22:13.244598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.989 [2024-11-15 11:22:13.244610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:35.989 [2024-11-15 11:22:13.244634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:35.989 [2024-11-15 11:22:13.244646] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.989 [2024-11-15 11:22:13.244657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:35.989 [2024-11-15 11:22:13.244668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:35.989 [2024-11-15 11:22:13.244677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.989 [2024-11-15 11:22:13.244691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:35.989 [2024-11-15 11:22:13.244701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:35.989 [2024-11-15 11:22:13.244713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.989 [2024-11-15 11:22:13.244722] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:35.989 [2024-11-15 11:22:13.244735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:35.989 [2024-11-15 11:22:13.244744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.989 [2024-11-15 11:22:13.244757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.989 [2024-11-15 11:22:13.244767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:35.989 [2024-11-15 11:22:13.244782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:35.989 [2024-11-15 11:22:13.244791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:35.989 [2024-11-15 11:22:13.244804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:35.989 [2024-11-15 11:22:13.244813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:35.989 [2024-11-15 11:22:13.244825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:35.989 [2024-11-15 11:22:13.244839] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:35.989 [2024-11-15 11:22:13.244854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.989 [2024-11-15 11:22:13.244865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:35.989 [2024-11-15 11:22:13.244881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:35.989 [2024-11-15 11:22:13.244892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:35.989 [2024-11-15 11:22:13.244904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:35.989 [2024-11-15 11:22:13.244914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:35.989 [2024-11-15 11:22:13.244926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:35.989 [2024-11-15 11:22:13.244936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:35.989 [2024-11-15 11:22:13.244952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:18:35.989 [2024-11-15 11:22:13.244962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:35.989 [2024-11-15 11:22:13.244978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:35.989 [2024-11-15 11:22:13.244989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:35.989 [2024-11-15 11:22:13.245001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:35.989 [2024-11-15 11:22:13.245012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:35.989 [2024-11-15 11:22:13.245024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:35.989 [2024-11-15 11:22:13.245034] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:35.989 [2024-11-15 11:22:13.245048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.989 [2024-11-15 11:22:13.245061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:35.989 [2024-11-15 11:22:13.245074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:35.989 [2024-11-15 11:22:13.245084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:35.989 [2024-11-15 11:22:13.245097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:35.989 [2024-11-15 11:22:13.245108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.989 [2024-11-15 11:22:13.245120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:35.989 [2024-11-15 11:22:13.245130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:18:35.989 [2024-11-15 11:22:13.245144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.989 [2024-11-15 11:22:13.245212] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
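The `fio.sh: line 52: [: -eq: unary operator expected` message earlier in this log is the usual bash failure mode for `[ $flag -eq 1 ]` with an unset variable: the unquoted expansion leaves `[ -eq 1 ]`, so `[` tries to read `-eq` as a unary operator, prints the error, and returns false, and the script simply continues to line 56. Below is a minimal sketch of the failure and two defensive rewrites; `some_flag` is a hypothetical stand-in, since the real variable name at fio.sh line 52 is not visible in this trace.

    #!/usr/bin/env bash
    set -x
    unset some_flag   # hypothetical stand-in for the unset variable at fio.sh line 52

    # Reproduces the log's error: expands to '[' -eq 1 ']'
    [ $some_flag -eq 1 ] && echo "flag set"

    # Defensive forms that tolerate an unset or empty variable:
    [ "${some_flag:-0}" -eq 1 ] && echo "flag set"   # quote and default to 0
    [[ ${some_flag:-0} -eq 1 ]] && echo "flag set"   # [[ ]] also avoids word splitting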
00:18:35.989 [2024-11-15 11:22:13.245230] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:40.173 [2024-11-15 11:22:17.391817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.173 [2024-11-15 11:22:17.392034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:40.173 [2024-11-15 11:22:17.392061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4153.334 ms 00:18:40.173 [2024-11-15 11:22:17.392075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.173 [2024-11-15 11:22:17.430854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.173 [2024-11-15 11:22:17.430913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:40.173 [2024-11-15 11:22:17.430929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.548 ms 00:18:40.173 [2024-11-15 11:22:17.430943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.173 [2024-11-15 11:22:17.431109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.173 [2024-11-15 11:22:17.431128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:40.173 [2024-11-15 11:22:17.431140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:40.173 [2024-11-15 11:22:17.431155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.173 [2024-11-15 11:22:17.490293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.173 [2024-11-15 11:22:17.490348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:40.173 [2024-11-15 11:22:17.490368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.174 ms 00:18:40.173 [2024-11-15 11:22:17.490382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.173 [2024-11-15 11:22:17.490440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.173 [2024-11-15 11:22:17.490453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:40.173 [2024-11-15 11:22:17.490465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:40.173 [2024-11-15 11:22:17.490478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.173 [2024-11-15 11:22:17.491014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.173 [2024-11-15 11:22:17.491036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:40.173 [2024-11-15 11:22:17.491047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:18:40.173 [2024-11-15 11:22:17.491064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.173 [2024-11-15 11:22:17.491193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.173 [2024-11-15 11:22:17.491211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:40.173 [2024-11-15 11:22:17.491222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:40.173 [2024-11-15 11:22:17.491237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.173 [2024-11-15 11:22:17.511817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.173 [2024-11-15 11:22:17.511866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:40.173 [2024-11-15 
11:22:17.511882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.582 ms 00:18:40.173 [2024-11-15 11:22:17.511895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.173 [2024-11-15 11:22:17.524548] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:40.173 [2024-11-15 11:22:17.541109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.173 [2024-11-15 11:22:17.541188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:40.173 [2024-11-15 11:22:17.541208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.116 ms 00:18:40.173 [2024-11-15 11:22:17.541223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.431 [2024-11-15 11:22:17.633782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.431 [2024-11-15 11:22:17.633840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:40.431 [2024-11-15 11:22:17.633863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.645 ms 00:18:40.431 [2024-11-15 11:22:17.633875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.431 [2024-11-15 11:22:17.634142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.431 [2024-11-15 11:22:17.634168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:40.432 [2024-11-15 11:22:17.634187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:18:40.432 [2024-11-15 11:22:17.634198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.432 [2024-11-15 11:22:17.671458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.432 [2024-11-15 11:22:17.671672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:40.432 [2024-11-15 11:22:17.671702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.243 ms 00:18:40.432 [2024-11-15 11:22:17.671713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.432 [2024-11-15 11:22:17.707468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.432 [2024-11-15 11:22:17.707658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:40.432 [2024-11-15 11:22:17.707689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.752 ms 00:18:40.432 [2024-11-15 11:22:17.707700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.432 [2024-11-15 11:22:17.708420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.432 [2024-11-15 11:22:17.708445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:40.432 [2024-11-15 11:22:17.708460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:18:40.432 [2024-11-15 11:22:17.708472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.432 [2024-11-15 11:22:17.810696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.432 [2024-11-15 11:22:17.810754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:40.432 [2024-11-15 11:22:17.810778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.317 ms 00:18:40.432 [2024-11-15 11:22:17.810793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.690 [2024-11-15 
11:22:17.849065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.690 [2024-11-15 11:22:17.849126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:40.690 [2024-11-15 11:22:17.849147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.219 ms 00:18:40.690 [2024-11-15 11:22:17.849159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.690 [2024-11-15 11:22:17.887008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.690 [2024-11-15 11:22:17.887082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:40.690 [2024-11-15 11:22:17.887104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.841 ms 00:18:40.690 [2024-11-15 11:22:17.887114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.690 [2024-11-15 11:22:17.924930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.690 [2024-11-15 11:22:17.925134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:40.690 [2024-11-15 11:22:17.925246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.820 ms 00:18:40.690 [2024-11-15 11:22:17.925286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.690 [2024-11-15 11:22:17.925362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.690 [2024-11-15 11:22:17.925402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:40.690 [2024-11-15 11:22:17.925496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:40.690 [2024-11-15 11:22:17.925532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.690 [2024-11-15 11:22:17.925710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.690 [2024-11-15 11:22:17.925727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:40.690 [2024-11-15 11:22:17.925747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:40.690 [2024-11-15 11:22:17.925757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.690 [2024-11-15 11:22:17.927000] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4707.432 ms, result 0 00:18:40.690 { 00:18:40.690 "name": "ftl0", 00:18:40.690 "uuid": "133021ba-9021-42d4-961b-67bd3eac4ea9" 00:18:40.690 } 00:18:40.690 11:22:17 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:40.690 11:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:18:40.690 11:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:40.690 11:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:18:40.690 11:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:40.690 11:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:40.690 11:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:40.948 11:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:41.205 [ 00:18:41.205 { 00:18:41.205 "name": "ftl0", 00:18:41.205 "aliases": [ 00:18:41.205 "133021ba-9021-42d4-961b-67bd3eac4ea9" 00:18:41.205 ], 00:18:41.205 "product_name": "FTL 
disk", 00:18:41.205 "block_size": 4096, 00:18:41.205 "num_blocks": 20971520, 00:18:41.205 "uuid": "133021ba-9021-42d4-961b-67bd3eac4ea9", 00:18:41.205 "assigned_rate_limits": { 00:18:41.205 "rw_ios_per_sec": 0, 00:18:41.205 "rw_mbytes_per_sec": 0, 00:18:41.205 "r_mbytes_per_sec": 0, 00:18:41.205 "w_mbytes_per_sec": 0 00:18:41.205 }, 00:18:41.205 "claimed": false, 00:18:41.205 "zoned": false, 00:18:41.205 "supported_io_types": { 00:18:41.205 "read": true, 00:18:41.205 "write": true, 00:18:41.205 "unmap": true, 00:18:41.205 "flush": true, 00:18:41.205 "reset": false, 00:18:41.205 "nvme_admin": false, 00:18:41.205 "nvme_io": false, 00:18:41.205 "nvme_io_md": false, 00:18:41.205 "write_zeroes": true, 00:18:41.205 "zcopy": false, 00:18:41.205 "get_zone_info": false, 00:18:41.205 "zone_management": false, 00:18:41.205 "zone_append": false, 00:18:41.205 "compare": false, 00:18:41.205 "compare_and_write": false, 00:18:41.205 "abort": false, 00:18:41.205 "seek_hole": false, 00:18:41.205 "seek_data": false, 00:18:41.205 "copy": false, 00:18:41.205 "nvme_iov_md": false 00:18:41.205 }, 00:18:41.205 "driver_specific": { 00:18:41.205 "ftl": { 00:18:41.205 "base_bdev": "a8b2631c-4ac4-4b7e-8dc8-860a288d6631", 00:18:41.205 "cache": "nvc0n1p0" 00:18:41.205 } 00:18:41.205 } 00:18:41.205 } 00:18:41.205 ] 00:18:41.205 11:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:18:41.205 11:22:18 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:41.205 11:22:18 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:41.463 11:22:18 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:41.463 11:22:18 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:41.463 [2024-11-15 11:22:18.810068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.463 [2024-11-15 11:22:18.810136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:41.463 [2024-11-15 11:22:18.810153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:41.463 [2024-11-15 11:22:18.810167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.463 [2024-11-15 11:22:18.810205] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:41.463 [2024-11-15 11:22:18.814474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.463 [2024-11-15 11:22:18.814510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:41.463 [2024-11-15 11:22:18.814526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.251 ms 00:18:41.463 [2024-11-15 11:22:18.814537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.463 [2024-11-15 11:22:18.815021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.463 [2024-11-15 11:22:18.815041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:41.463 [2024-11-15 11:22:18.815056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:18:41.463 [2024-11-15 11:22:18.815066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.464 [2024-11-15 11:22:18.817583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.464 [2024-11-15 11:22:18.817628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:41.464 
[2024-11-15 11:22:18.817643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.493 ms 00:18:41.464 [2024-11-15 11:22:18.817653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.464 [2024-11-15 11:22:18.822756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.464 [2024-11-15 11:22:18.822799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:41.464 [2024-11-15 11:22:18.822816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.072 ms 00:18:41.464 [2024-11-15 11:22:18.822831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.464 [2024-11-15 11:22:18.860310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.464 [2024-11-15 11:22:18.860504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:41.464 [2024-11-15 11:22:18.860534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.433 ms 00:18:41.464 [2024-11-15 11:22:18.860544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.722 [2024-11-15 11:22:18.883353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.722 [2024-11-15 11:22:18.883538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:41.722 [2024-11-15 11:22:18.883580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.756 ms 00:18:41.722 [2024-11-15 11:22:18.883592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.722 [2024-11-15 11:22:18.883854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.722 [2024-11-15 11:22:18.883869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:41.722 [2024-11-15 11:22:18.883883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:18:41.722 [2024-11-15 11:22:18.883894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.722 [2024-11-15 11:22:18.920663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.722 [2024-11-15 11:22:18.920723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:41.723 [2024-11-15 11:22:18.920744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.796 ms 00:18:41.723 [2024-11-15 11:22:18.920755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.723 [2024-11-15 11:22:18.957269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.723 [2024-11-15 11:22:18.957321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:41.723 [2024-11-15 11:22:18.957340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.513 ms 00:18:41.723 [2024-11-15 11:22:18.957351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.723 [2024-11-15 11:22:18.993459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.723 [2024-11-15 11:22:18.993509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:41.723 [2024-11-15 11:22:18.993527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.104 ms 00:18:41.723 [2024-11-15 11:22:18.993553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.723 [2024-11-15 11:22:19.029092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.723 [2024-11-15 11:22:19.029152] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:41.723 [2024-11-15 11:22:19.029172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.451 ms 00:18:41.723 [2024-11-15 11:22:19.029198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.723 [2024-11-15 11:22:19.029251] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:41.723 [2024-11-15 11:22:19.029268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 
[2024-11-15 11:22:19.029546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:41.723 [2024-11-15 11:22:19.029884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.029992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:41.723 [2024-11-15 11:22:19.030267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:41.724 [2024-11-15 11:22:19.030571] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:41.724 [2024-11-15 11:22:19.030585] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 133021ba-9021-42d4-961b-67bd3eac4ea9 00:18:41.724 [2024-11-15 11:22:19.030597] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:41.724 [2024-11-15 11:22:19.030612] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:41.724 [2024-11-15 11:22:19.030622] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:41.724 [2024-11-15 11:22:19.030638] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:41.724 [2024-11-15 11:22:19.030649] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:41.724 [2024-11-15 11:22:19.030661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:41.724 [2024-11-15 11:22:19.030671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:41.724 [2024-11-15 11:22:19.030683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:41.724 [2024-11-15 11:22:19.030692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:41.724 [2024-11-15 11:22:19.030707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.724 [2024-11-15 11:22:19.030717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:41.724 [2024-11-15 11:22:19.030730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.460 ms 00:18:41.724 [2024-11-15 11:22:19.030740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.724 [2024-11-15 11:22:19.050385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.724 [2024-11-15 11:22:19.050426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:41.724 [2024-11-15 11:22:19.050443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.591 ms 00:18:41.724 [2024-11-15 11:22:19.050453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.724 [2024-11-15 11:22:19.051032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.724 [2024-11-15 11:22:19.051049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:41.724 [2024-11-15 11:22:19.051063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:18:41.724 [2024-11-15 11:22:19.051073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.982 [2024-11-15 11:22:19.121079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.982 [2024-11-15 11:22:19.121138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:41.983 [2024-11-15 11:22:19.121157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.983 [2024-11-15 11:22:19.121168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
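The Rollback entries around this point are FTL's shutdown path unwinding the earlier startup steps in reverse order; they are emitted by the `bdev_ftl_unload` call issued at fio.sh line 73 above. A sketch of the create/unload cycle this test exercises, using only the RPCs visible in this log (the device names, UUID, and paths are copied from this run and will differ on another host):

    #!/usr/bin/env bash
    # Values below are taken from this log; adjust for your environment.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BASE=a8b2631c-4ac4-4b7e-8dc8-860a288d6631   # thin-provisioned lvol on nvme0n1
    CACHE=nvc0n1p0                              # 5171 MiB split of the NV cache bdev

    # Startup: a long RPC timeout, since first-time creation scrubs the whole
    # NV cache data region (about 4.2 s in this run).
    "$RPC" -t 240 bdev_ftl_create -b ftl0 -d "$BASE" -c "$CACHE" --l2p_dram_limit 60

    # Make sure the bdev is registered before fio opens it.
    "$RPC" bdev_wait_for_examine
    "$RPC" bdev_get_bdevs -b ftl0 -t 2000

    # Shutdown: persists L2P and metadata, then rolls each startup step back,
    # producing the Rollback sequence seen in this log.
    "$RPC" bdev_ftl_unload -b ftl0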
00:18:41.983 [2024-11-15 11:22:19.121254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.983 [2024-11-15 11:22:19.121265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:41.983 [2024-11-15 11:22:19.121278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.983 [2024-11-15 11:22:19.121288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.983 [2024-11-15 11:22:19.121415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.983 [2024-11-15 11:22:19.121434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:41.983 [2024-11-15 11:22:19.121447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.983 [2024-11-15 11:22:19.121457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.983 [2024-11-15 11:22:19.121494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.983 [2024-11-15 11:22:19.121505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:41.983 [2024-11-15 11:22:19.121518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.983 [2024-11-15 11:22:19.121528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.983 [2024-11-15 11:22:19.254134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.983 [2024-11-15 11:22:19.254218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:41.983 [2024-11-15 11:22:19.254237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.983 [2024-11-15 11:22:19.254248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.983 [2024-11-15 11:22:19.356658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.983 [2024-11-15 11:22:19.356726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:41.983 [2024-11-15 11:22:19.356744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.983 [2024-11-15 11:22:19.356755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.983 [2024-11-15 11:22:19.356886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.983 [2024-11-15 11:22:19.356900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:41.983 [2024-11-15 11:22:19.356918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.983 [2024-11-15 11:22:19.356928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.983 [2024-11-15 11:22:19.357014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.983 [2024-11-15 11:22:19.357026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:41.983 [2024-11-15 11:22:19.357039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.983 [2024-11-15 11:22:19.357049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.983 [2024-11-15 11:22:19.357190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.983 [2024-11-15 11:22:19.357205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:41.983 [2024-11-15 11:22:19.357218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.983 [2024-11-15 
11:22:19.357232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.983 [2024-11-15 11:22:19.357292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.983 [2024-11-15 11:22:19.357305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:41.983 [2024-11-15 11:22:19.357318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.983 [2024-11-15 11:22:19.357327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.983 [2024-11-15 11:22:19.357374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.983 [2024-11-15 11:22:19.357386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:41.983 [2024-11-15 11:22:19.357399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.983 [2024-11-15 11:22:19.357408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.983 [2024-11-15 11:22:19.357466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.983 [2024-11-15 11:22:19.357478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:41.983 [2024-11-15 11:22:19.357491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.983 [2024-11-15 11:22:19.357501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.983 [2024-11-15 11:22:19.357700] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 548.492 ms, result 0 00:18:41.983 true 00:18:42.242 11:22:19 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74165 00:18:42.242 11:22:19 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 74165 ']' 00:18:42.242 11:22:19 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 74165 00:18:42.242 11:22:19 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:18:42.242 11:22:19 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:42.242 11:22:19 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74165 00:18:42.242 killing process with pid 74165 00:18:42.242 11:22:19 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:42.242 11:22:19 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:42.242 11:22:19 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74165' 00:18:42.242 11:22:19 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 74165 00:18:42.242 11:22:19 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 74165 00:18:47.578 11:22:24 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:47.578 11:22:24 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:47.578 11:22:24 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:47.578 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.578 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:47.578 11:22:24 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:47.579 11:22:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:47.579 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:47.579 fio-3.35 00:18:47.579 Starting 1 thread 00:18:54.164 00:18:54.164 test: (groupid=0, jobs=1): err= 0: pid=74390: Fri Nov 15 11:22:30 2024 00:18:54.164 read: IOPS=862, BW=57.3MiB/s (60.1MB/s)(255MiB/4443msec) 00:18:54.164 slat (nsec): min=4771, max=44202, avg=10321.82, stdev=3265.69 00:18:54.164 clat (usec): min=317, max=882, avg=519.12, stdev=63.38 00:18:54.164 lat (usec): min=328, max=891, avg=529.44, stdev=64.29 00:18:54.164 clat percentiles (usec): 00:18:54.164 | 1.00th=[ 383], 5.00th=[ 400], 10.00th=[ 457], 20.00th=[ 474], 00:18:54.164 | 30.00th=[ 482], 40.00th=[ 490], 50.00th=[ 523], 60.00th=[ 553], 00:18:54.164 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 578], 95.00th=[ 619], 00:18:54.164 | 99.00th=[ 685], 99.50th=[ 701], 99.90th=[ 734], 99.95th=[ 832], 00:18:54.164 | 99.99th=[ 881] 00:18:54.164 write: IOPS=868, BW=57.7MiB/s (60.5MB/s)(256MiB/4438msec); 0 zone resets 00:18:54.164 slat (usec): min=15, max=100, avg=27.31, stdev= 6.62 00:18:54.164 clat (usec): min=391, max=1112, avg=586.70, stdev=81.23 00:18:54.164 lat (usec): min=419, max=1144, avg=614.01, stdev=81.95 00:18:54.164 clat percentiles (usec): 00:18:54.164 | 1.00th=[ 408], 5.00th=[ 474], 10.00th=[ 486], 20.00th=[ 510], 00:18:54.164 | 30.00th=[ 562], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 594], 00:18:54.164 | 70.00th=[ 635], 80.00th=[ 660], 90.00th=[ 668], 95.00th=[ 676], 00:18:54.164 | 99.00th=[ 873], 99.50th=[ 947], 99.90th=[ 1074], 99.95th=[ 1106], 00:18:54.164 | 99.99th=[ 1106] 00:18:54.164 bw ( KiB/s): min=55080, max=63104, per=100.00%, avg=59213.12, stdev=2281.54, samples=8 00:18:54.164 iops : min= 810, max= 928, avg=870.75, stdev=33.55, samples=8 00:18:54.164 lat (usec) : 500=30.97%, 750=68.11%, 1000=0.75% 00:18:54.164 lat (msec) 
: 2=0.17% 00:18:54.164 cpu : usr=99.17%, sys=0.09%, ctx=10, majf=0, minf=1170 00:18:54.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.164 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:54.164 00:18:54.164 Run status group 0 (all jobs): 00:18:54.164 READ: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=255MiB (267MB), run=4443-4443msec 00:18:54.164 WRITE: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=256MiB (269MB), run=4438-4438msec 00:18:55.121 ----------------------------------------------------- 00:18:55.121 Suppressions used: 00:18:55.121 count bytes template 00:18:55.121 1 5 /usr/src/fio/parse.c 00:18:55.121 1 8 libtcmalloc_minimal.so 00:18:55.121 1 904 libcrypto.so 00:18:55.121 ----------------------------------------------------- 00:18:55.121 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:55.121 11:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:55.380 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:55.380 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:55.380 fio-3.35 00:18:55.380 Starting 2 threads 00:19:21.927 00:19:21.927 first_half: (groupid=0, jobs=1): err= 0: pid=74509: Fri Nov 15 11:22:58 2024 00:19:21.927 read: IOPS=2635, BW=10.3MiB/s (10.8MB/s)(255MiB/24754msec) 00:19:21.927 slat (nsec): min=3565, max=62882, avg=6202.11, stdev=2024.66 00:19:21.927 clat (usec): min=997, max=270523, avg=36411.89, stdev=18815.95 00:19:21.927 lat (usec): min=1004, max=270529, avg=36418.09, stdev=18816.14 00:19:21.927 clat percentiles (msec): 00:19:21.927 | 1.00th=[ 8], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:19:21.927 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:19:21.927 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 39], 95.00th=[ 46], 00:19:21.927 | 99.00th=[ 144], 99.50th=[ 165], 99.90th=[ 211], 99.95th=[ 234], 00:19:21.927 | 99.99th=[ 262] 00:19:21.927 write: IOPS=3117, BW=12.2MiB/s (12.8MB/s)(256MiB/21024msec); 0 zone resets 00:19:21.927 slat (usec): min=4, max=798, avg= 8.15, stdev= 7.66 00:19:21.927 clat (usec): min=458, max=87862, avg=12052.15, stdev=20191.05 00:19:21.927 lat (usec): min=467, max=87871, avg=12060.30, stdev=20191.32 00:19:21.927 clat percentiles (usec): 00:19:21.927 | 1.00th=[ 955], 5.00th=[ 1205], 10.00th=[ 1434], 20.00th=[ 1844], 00:19:21.927 | 30.00th=[ 3097], 40.00th=[ 4817], 50.00th=[ 5735], 60.00th=[ 6521], 00:19:21.927 | 70.00th=[ 7635], 80.00th=[11076], 90.00th=[32900], 95.00th=[76022], 00:19:21.927 | 99.00th=[81265], 99.50th=[82314], 99.90th=[85459], 99.95th=[86508], 00:19:21.927 | 99.99th=[87557] 00:19:21.927 bw ( KiB/s): min= 216, max=41216, per=77.87%, avg=19418.07, stdev=12231.53, samples=27 00:19:21.927 iops : min= 54, max=10304, avg=4854.52, stdev=3057.88, samples=27 00:19:21.927 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.62% 00:19:21.927 lat (msec) : 2=11.17%, 4=6.02%, 10=22.00%, 20=6.32%, 50=47.48% 00:19:21.927 lat (msec) : 100=5.19%, 250=1.10%, 500=0.01% 00:19:21.927 cpu : usr=99.24%, sys=0.17%, ctx=38, majf=0, minf=5575 00:19:21.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:21.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.927 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.927 issued rwts: total=65245,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.927 second_half: (groupid=0, jobs=1): err= 0: pid=74510: Fri Nov 15 11:22:58 2024 00:19:21.927 read: IOPS=2645, BW=10.3MiB/s (10.8MB/s)(255MiB/24638msec) 00:19:21.927 slat (nsec): min=3578, max=31153, avg=6124.38, stdev=1963.41 00:19:21.927 clat (usec): min=976, max=277533, avg=36943.73, stdev=18067.67 00:19:21.927 lat (usec): min=981, max=277540, avg=36949.86, stdev=18067.86 00:19:21.927 clat percentiles (msec): 00:19:21.927 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:19:21.927 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:19:21.927 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 40], 95.00th=[ 48], 00:19:21.927 | 
99.00th=[ 142], 99.50th=[ 159], 99.90th=[ 186], 99.95th=[ 188], 00:19:21.927 | 99.99th=[ 271] 00:19:21.927 write: IOPS=3435, BW=13.4MiB/s (14.1MB/s)(256MiB/19078msec); 0 zone resets 00:19:21.927 slat (usec): min=4, max=109, avg= 7.75, stdev= 3.74 00:19:21.927 clat (usec): min=458, max=87743, avg=11354.90, stdev=19998.00 00:19:21.927 lat (usec): min=468, max=87750, avg=11362.65, stdev=19998.07 00:19:21.927 clat percentiles (usec): 00:19:21.927 | 1.00th=[ 1004], 5.00th=[ 1287], 10.00th=[ 1483], 20.00th=[ 1729], 00:19:21.927 | 30.00th=[ 1975], 40.00th=[ 3556], 50.00th=[ 4948], 60.00th=[ 6128], 00:19:21.927 | 70.00th=[ 7701], 80.00th=[11076], 90.00th=[21890], 95.00th=[76022], 00:19:21.927 | 99.00th=[81265], 99.50th=[83362], 99.90th=[85459], 99.95th=[86508], 00:19:21.927 | 99.99th=[86508] 00:19:21.927 bw ( KiB/s): min= 32, max=54104, per=91.41%, avg=22795.13, stdev=14525.02, samples=23 00:19:21.927 iops : min= 8, max=13526, avg=5698.78, stdev=3631.26, samples=23 00:19:21.928 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.39% 00:19:21.928 lat (msec) : 2=14.98%, 4=6.98%, 10=16.20%, 20=7.45%, 50=47.57% 00:19:21.928 lat (msec) : 100=5.10%, 250=1.23%, 500=0.01% 00:19:21.928 cpu : usr=99.24%, sys=0.20%, ctx=47, majf=0, minf=5552 00:19:21.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:21.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.928 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.928 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.928 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.928 00:19:21.928 Run status group 0 (all jobs): 00:19:21.928 READ: bw=20.6MiB/s (21.6MB/s), 10.3MiB/s-10.3MiB/s (10.8MB/s-10.8MB/s), io=509MiB (534MB), run=24638-24754msec 00:19:21.928 WRITE: bw=24.4MiB/s (25.5MB/s), 12.2MiB/s-13.4MiB/s (12.8MB/s-14.1MB/s), io=512MiB (537MB), run=19078-21024msec 00:19:23.829 ----------------------------------------------------- 00:19:23.829 Suppressions used: 00:19:23.829 count bytes template 00:19:23.829 2 10 /usr/src/fio/parse.c 00:19:23.829 3 288 /usr/src/fio/iolog.c 00:19:23.829 1 8 libtcmalloc_minimal.so 00:19:23.829 1 904 libcrypto.so 00:19:23.829 ----------------------------------------------------- 00:19:23.829 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:23.829 11:23:01 
ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:23.829 11:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:24.088 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:24.088 fio-3.35 00:19:24.088 Starting 1 thread 00:19:42.192 00:19:42.192 test: (groupid=0, jobs=1): err= 0: pid=74834: Fri Nov 15 11:23:17 2024 00:19:42.192 read: IOPS=6813, BW=26.6MiB/s (27.9MB/s)(255MiB/9569msec) 00:19:42.192 slat (usec): min=3, max=121, avg= 7.19, stdev= 3.14 00:19:42.192 clat (usec): min=754, max=31363, avg=18774.01, stdev=1951.35 00:19:42.192 lat (usec): min=770, max=31369, avg=18781.21, stdev=1952.57 00:19:42.192 clat percentiles (usec): 00:19:42.192 | 1.00th=[15795], 5.00th=[16057], 10.00th=[16188], 20.00th=[16450], 00:19:42.192 | 30.00th=[16909], 40.00th=[19530], 50.00th=[19792], 60.00th=[19792], 00:19:42.192 | 70.00th=[20055], 80.00th=[20055], 90.00th=[20317], 95.00th=[20579], 00:19:42.192 | 99.00th=[23462], 99.50th=[23987], 99.90th=[28181], 99.95th=[28967], 00:19:42.192 | 99.99th=[30802] 00:19:42.192 write: IOPS=12.4k, BW=48.6MiB/s (50.9MB/s)(256MiB/5272msec); 0 zone resets 00:19:42.192 slat (usec): min=4, max=712, avg= 8.35, stdev= 8.38 00:19:42.192 clat (usec): min=603, max=67127, avg=10247.39, stdev=13152.12 00:19:42.192 lat (usec): min=612, max=67143, avg=10255.73, stdev=13152.22 00:19:42.192 clat percentiles (usec): 00:19:42.192 | 1.00th=[ 971], 5.00th=[ 1172], 10.00th=[ 1336], 20.00th=[ 1549], 00:19:42.192 | 30.00th=[ 1795], 40.00th=[ 2376], 50.00th=[ 6456], 60.00th=[ 7504], 00:19:42.192 | 70.00th=[ 8455], 80.00th=[10159], 90.00th=[36439], 95.00th=[39584], 00:19:42.192 | 99.00th=[50594], 99.50th=[53740], 99.90th=[57934], 99.95th=[59507], 00:19:42.192 | 99.99th=[61604] 00:19:42.192 bw ( KiB/s): min=18520, max=69568, per=95.85%, avg=47662.55, stdev=13578.55, samples=11 00:19:42.192 iops : min= 4630, max=17392, avg=11915.64, stdev=3394.64, samples=11 00:19:42.192 lat (usec) : 750=0.03%, 1000=0.62% 00:19:42.192 lat (msec) : 2=17.29%, 4=3.15%, 10=18.73%, 20=37.14%, 50=22.50% 00:19:42.192 lat (msec) : 100=0.55% 00:19:42.192 cpu : usr=98.57%, sys=0.56%, ctx=23, majf=0, minf=5565 
00:19:42.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:42.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.192 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:42.192 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.192 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:42.192 00:19:42.192 Run status group 0 (all jobs): 00:19:42.192 READ: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=255MiB (267MB), run=9569-9569msec 00:19:42.192 WRITE: bw=48.6MiB/s (50.9MB/s), 48.6MiB/s-48.6MiB/s (50.9MB/s-50.9MB/s), io=256MiB (268MB), run=5272-5272msec 00:19:42.450 ----------------------------------------------------- 00:19:42.450 Suppressions used: 00:19:42.450 count bytes template 00:19:42.450 1 5 /usr/src/fio/parse.c 00:19:42.450 2 192 /usr/src/fio/iolog.c 00:19:42.451 1 8 libtcmalloc_minimal.so 00:19:42.451 1 904 libcrypto.so 00:19:42.451 ----------------------------------------------------- 00:19:42.451 00:19:42.451 11:23:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:42.451 11:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:42.451 11:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:42.710 11:23:19 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:42.710 11:23:19 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:42.710 11:23:19 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:42.710 Remove shared memory files 00:19:42.710 11:23:19 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:42.710 11:23:19 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:42.710 11:23:19 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57775 /dev/shm/spdk_tgt_trace.pid73059 00:19:42.710 11:23:19 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:42.710 11:23:19 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:42.710 ************************************ 00:19:42.710 END TEST ftl_fio_basic 00:19:42.710 ************************************ 00:19:42.710 00:19:42.710 real 1m10.979s 00:19:42.710 user 2m31.029s 00:19:42.710 sys 0m3.997s 00:19:42.710 11:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:42.710 11:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:42.710 11:23:19 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:42.710 11:23:19 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:42.710 11:23:19 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:42.710 11:23:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:42.710 ************************************ 00:19:42.710 START TEST ftl_bdevperf 00:19:42.710 ************************************ 00:19:42.710 11:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:42.710 * Looking for test storage... 
00:19:42.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:42.710 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:42.710 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:42.710 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:42.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.970 --rc genhtml_branch_coverage=1 00:19:42.970 --rc genhtml_function_coverage=1 00:19:42.970 --rc genhtml_legend=1 00:19:42.970 --rc geninfo_all_blocks=1 00:19:42.970 --rc geninfo_unexecuted_blocks=1 00:19:42.970 00:19:42.970 ' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:42.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.970 --rc genhtml_branch_coverage=1 00:19:42.970 
--rc genhtml_function_coverage=1 00:19:42.970 --rc genhtml_legend=1 00:19:42.970 --rc geninfo_all_blocks=1 00:19:42.970 --rc geninfo_unexecuted_blocks=1 00:19:42.970 00:19:42.970 ' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:42.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.970 --rc genhtml_branch_coverage=1 00:19:42.970 --rc genhtml_function_coverage=1 00:19:42.970 --rc genhtml_legend=1 00:19:42.970 --rc geninfo_all_blocks=1 00:19:42.970 --rc geninfo_unexecuted_blocks=1 00:19:42.970 00:19:42.970 ' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:42.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.970 --rc genhtml_branch_coverage=1 00:19:42.970 --rc genhtml_function_coverage=1 00:19:42.970 --rc genhtml_legend=1 00:19:42.970 --rc geninfo_all_blocks=1 00:19:42.970 --rc geninfo_unexecuted_blocks=1 00:19:42.970 00:19:42.970 ' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75089 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75089 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 75089 ']' 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:42.970 11:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:42.970 [2024-11-15 11:23:20.316006] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:19:42.970 [2024-11-15 11:23:20.316338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75089 ] 00:19:43.229 [2024-11-15 11:23:20.499040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.229 [2024-11-15 11:23:20.609452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.796 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:43.796 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:19:43.796 11:23:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:43.796 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:43.796 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:43.796 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:43.796 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:43.796 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:44.054 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:44.054 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:44.054 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:44.054 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:19:44.054 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:44.054 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:19:44.054 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:44.054 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:44.313 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:44.313 { 00:19:44.313 "name": "nvme0n1", 00:19:44.313 "aliases": [ 00:19:44.313 "bd10bbf6-d68b-4d96-864d-2ffc9761ddfc" 00:19:44.313 ], 00:19:44.313 "product_name": "NVMe disk", 00:19:44.313 "block_size": 4096, 00:19:44.313 "num_blocks": 1310720, 00:19:44.313 "uuid": "bd10bbf6-d68b-4d96-864d-2ffc9761ddfc", 00:19:44.313 "numa_id": -1, 00:19:44.313 "assigned_rate_limits": { 00:19:44.313 "rw_ios_per_sec": 0, 00:19:44.313 "rw_mbytes_per_sec": 0, 00:19:44.313 "r_mbytes_per_sec": 0, 00:19:44.313 "w_mbytes_per_sec": 0 00:19:44.313 }, 00:19:44.313 "claimed": true, 00:19:44.313 "claim_type": "read_many_write_one", 00:19:44.313 "zoned": false, 00:19:44.313 "supported_io_types": { 00:19:44.313 "read": true, 00:19:44.313 "write": true, 00:19:44.313 "unmap": true, 00:19:44.313 "flush": true, 00:19:44.313 "reset": true, 00:19:44.313 "nvme_admin": true, 00:19:44.313 "nvme_io": true, 00:19:44.313 "nvme_io_md": false, 00:19:44.313 "write_zeroes": true, 00:19:44.313 "zcopy": false, 00:19:44.313 "get_zone_info": false, 00:19:44.313 "zone_management": false, 00:19:44.313 "zone_append": false, 00:19:44.313 "compare": true, 00:19:44.313 "compare_and_write": false, 00:19:44.313 "abort": true, 00:19:44.313 "seek_hole": false, 00:19:44.313 "seek_data": false, 00:19:44.313 "copy": true, 00:19:44.313 "nvme_iov_md": false 00:19:44.313 }, 00:19:44.313 "driver_specific": { 00:19:44.313 
"nvme": [ 00:19:44.313 { 00:19:44.313 "pci_address": "0000:00:11.0", 00:19:44.313 "trid": { 00:19:44.313 "trtype": "PCIe", 00:19:44.313 "traddr": "0000:00:11.0" 00:19:44.313 }, 00:19:44.313 "ctrlr_data": { 00:19:44.313 "cntlid": 0, 00:19:44.313 "vendor_id": "0x1b36", 00:19:44.313 "model_number": "QEMU NVMe Ctrl", 00:19:44.313 "serial_number": "12341", 00:19:44.313 "firmware_revision": "8.0.0", 00:19:44.313 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:44.313 "oacs": { 00:19:44.313 "security": 0, 00:19:44.313 "format": 1, 00:19:44.313 "firmware": 0, 00:19:44.313 "ns_manage": 1 00:19:44.313 }, 00:19:44.313 "multi_ctrlr": false, 00:19:44.313 "ana_reporting": false 00:19:44.313 }, 00:19:44.313 "vs": { 00:19:44.313 "nvme_version": "1.4" 00:19:44.313 }, 00:19:44.313 "ns_data": { 00:19:44.313 "id": 1, 00:19:44.313 "can_share": false 00:19:44.313 } 00:19:44.313 } 00:19:44.313 ], 00:19:44.313 "mp_policy": "active_passive" 00:19:44.313 } 00:19:44.313 } 00:19:44.313 ]' 00:19:44.313 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:44.313 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:44.313 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:44.572 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:19:44.572 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:19:44.572 11:23:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:19:44.572 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:44.572 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:44.572 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:44.572 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:44.572 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:44.572 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=dfa84a0c-f871-490f-a753-9532dd659c6a 00:19:44.572 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:44.572 11:23:21 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dfa84a0c-f871-490f-a753-9532dd659c6a 00:19:44.831 11:23:22 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:45.091 11:23:22 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=35b440cc-0ce5-4f77-a90b-913f558ace48 00:19:45.091 11:23:22 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 35b440cc-0ce5-4f77-a90b-913f558ace48 00:19:45.349 11:23:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=ee6d28e9-fcaa-423b-bff5-82820a512f8b 00:19:45.349 11:23:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ee6d28e9-fcaa-423b-bff5-82820a512f8b 00:19:45.349 11:23:22 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:45.349 11:23:22 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:45.349 11:23:22 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=ee6d28e9-fcaa-423b-bff5-82820a512f8b 00:19:45.349 11:23:22 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:45.349 11:23:22 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size ee6d28e9-fcaa-423b-bff5-82820a512f8b 00:19:45.349 11:23:22 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=ee6d28e9-fcaa-423b-bff5-82820a512f8b 00:19:45.349 11:23:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:45.349 11:23:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:19:45.349 11:23:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:45.349 11:23:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee6d28e9-fcaa-423b-bff5-82820a512f8b 00:19:45.609 11:23:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:45.609 { 00:19:45.609 "name": "ee6d28e9-fcaa-423b-bff5-82820a512f8b", 00:19:45.609 "aliases": [ 00:19:45.609 "lvs/nvme0n1p0" 00:19:45.609 ], 00:19:45.609 "product_name": "Logical Volume", 00:19:45.609 "block_size": 4096, 00:19:45.609 "num_blocks": 26476544, 00:19:45.609 "uuid": "ee6d28e9-fcaa-423b-bff5-82820a512f8b", 00:19:45.609 "assigned_rate_limits": { 00:19:45.609 "rw_ios_per_sec": 0, 00:19:45.609 "rw_mbytes_per_sec": 0, 00:19:45.609 "r_mbytes_per_sec": 0, 00:19:45.609 "w_mbytes_per_sec": 0 00:19:45.609 }, 00:19:45.609 "claimed": false, 00:19:45.609 "zoned": false, 00:19:45.609 "supported_io_types": { 00:19:45.609 "read": true, 00:19:45.609 "write": true, 00:19:45.609 "unmap": true, 00:19:45.609 "flush": false, 00:19:45.609 "reset": true, 00:19:45.609 "nvme_admin": false, 00:19:45.609 "nvme_io": false, 00:19:45.609 "nvme_io_md": false, 00:19:45.609 "write_zeroes": true, 00:19:45.609 "zcopy": false, 00:19:45.609 "get_zone_info": false, 00:19:45.609 "zone_management": false, 00:19:45.609 "zone_append": false, 00:19:45.609 "compare": false, 00:19:45.609 "compare_and_write": false, 00:19:45.609 "abort": false, 00:19:45.609 "seek_hole": true, 00:19:45.609 "seek_data": true, 00:19:45.609 "copy": false, 00:19:45.609 "nvme_iov_md": false 00:19:45.609 }, 00:19:45.609 "driver_specific": { 00:19:45.609 "lvol": { 00:19:45.609 "lvol_store_uuid": "35b440cc-0ce5-4f77-a90b-913f558ace48", 00:19:45.609 "base_bdev": "nvme0n1", 00:19:45.609 "thin_provision": true, 00:19:45.609 "num_allocated_clusters": 0, 00:19:45.609 "snapshot": false, 00:19:45.609 "clone": false, 00:19:45.609 "esnap_clone": false 00:19:45.609 } 00:19:45.609 } 00:19:45.609 } 00:19:45.609 ]' 00:19:45.609 11:23:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:45.609 11:23:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:45.609 11:23:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:45.609 11:23:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:45.609 11:23:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:45.609 11:23:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:19:45.609 11:23:22 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:45.609 11:23:22 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:45.609 11:23:22 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:45.869 11:23:23 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:45.869 11:23:23 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:45.869 11:23:23 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size ee6d28e9-fcaa-423b-bff5-82820a512f8b 00:19:45.869 11:23:23 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=ee6d28e9-fcaa-423b-bff5-82820a512f8b 00:19:45.869 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:45.869 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:19:45.869 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:45.869 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee6d28e9-fcaa-423b-bff5-82820a512f8b 00:19:46.127 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:46.127 { 00:19:46.127 "name": "ee6d28e9-fcaa-423b-bff5-82820a512f8b", 00:19:46.127 "aliases": [ 00:19:46.127 "lvs/nvme0n1p0" 00:19:46.127 ], 00:19:46.127 "product_name": "Logical Volume", 00:19:46.127 "block_size": 4096, 00:19:46.127 "num_blocks": 26476544, 00:19:46.127 "uuid": "ee6d28e9-fcaa-423b-bff5-82820a512f8b", 00:19:46.127 "assigned_rate_limits": { 00:19:46.127 "rw_ios_per_sec": 0, 00:19:46.127 "rw_mbytes_per_sec": 0, 00:19:46.127 "r_mbytes_per_sec": 0, 00:19:46.127 "w_mbytes_per_sec": 0 00:19:46.127 }, 00:19:46.127 "claimed": false, 00:19:46.127 "zoned": false, 00:19:46.127 "supported_io_types": { 00:19:46.127 "read": true, 00:19:46.127 "write": true, 00:19:46.127 "unmap": true, 00:19:46.127 "flush": false, 00:19:46.127 "reset": true, 00:19:46.127 "nvme_admin": false, 00:19:46.127 "nvme_io": false, 00:19:46.127 "nvme_io_md": false, 00:19:46.127 "write_zeroes": true, 00:19:46.127 "zcopy": false, 00:19:46.127 "get_zone_info": false, 00:19:46.127 "zone_management": false, 00:19:46.127 "zone_append": false, 00:19:46.127 "compare": false, 00:19:46.127 "compare_and_write": false, 00:19:46.127 "abort": false, 00:19:46.127 "seek_hole": true, 00:19:46.127 "seek_data": true, 00:19:46.127 "copy": false, 00:19:46.127 "nvme_iov_md": false 00:19:46.127 }, 00:19:46.127 "driver_specific": { 00:19:46.127 "lvol": { 00:19:46.127 "lvol_store_uuid": "35b440cc-0ce5-4f77-a90b-913f558ace48", 00:19:46.127 "base_bdev": "nvme0n1", 00:19:46.127 "thin_provision": true, 00:19:46.127 "num_allocated_clusters": 0, 00:19:46.127 "snapshot": false, 00:19:46.127 "clone": false, 00:19:46.127 "esnap_clone": false 00:19:46.127 } 00:19:46.127 } 00:19:46.127 } 00:19:46.127 ]' 00:19:46.127 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:46.127 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:46.127 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:46.127 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:46.127 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:46.127 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:19:46.127 11:23:23 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:46.127 11:23:23 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:46.385 11:23:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:46.385 11:23:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size ee6d28e9-fcaa-423b-bff5-82820a512f8b 00:19:46.385 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=ee6d28e9-fcaa-423b-bff5-82820a512f8b 00:19:46.385 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:46.385 11:23:23 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:19:46.385 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:46.385 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee6d28e9-fcaa-423b-bff5-82820a512f8b 00:19:46.643 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:46.643 { 00:19:46.643 "name": "ee6d28e9-fcaa-423b-bff5-82820a512f8b", 00:19:46.643 "aliases": [ 00:19:46.643 "lvs/nvme0n1p0" 00:19:46.643 ], 00:19:46.643 "product_name": "Logical Volume", 00:19:46.643 "block_size": 4096, 00:19:46.643 "num_blocks": 26476544, 00:19:46.643 "uuid": "ee6d28e9-fcaa-423b-bff5-82820a512f8b", 00:19:46.643 "assigned_rate_limits": { 00:19:46.643 "rw_ios_per_sec": 0, 00:19:46.643 "rw_mbytes_per_sec": 0, 00:19:46.643 "r_mbytes_per_sec": 0, 00:19:46.643 "w_mbytes_per_sec": 0 00:19:46.643 }, 00:19:46.643 "claimed": false, 00:19:46.643 "zoned": false, 00:19:46.643 "supported_io_types": { 00:19:46.643 "read": true, 00:19:46.643 "write": true, 00:19:46.643 "unmap": true, 00:19:46.643 "flush": false, 00:19:46.643 "reset": true, 00:19:46.643 "nvme_admin": false, 00:19:46.643 "nvme_io": false, 00:19:46.643 "nvme_io_md": false, 00:19:46.643 "write_zeroes": true, 00:19:46.643 "zcopy": false, 00:19:46.643 "get_zone_info": false, 00:19:46.643 "zone_management": false, 00:19:46.643 "zone_append": false, 00:19:46.643 "compare": false, 00:19:46.643 "compare_and_write": false, 00:19:46.643 "abort": false, 00:19:46.643 "seek_hole": true, 00:19:46.643 "seek_data": true, 00:19:46.643 "copy": false, 00:19:46.643 "nvme_iov_md": false 00:19:46.643 }, 00:19:46.643 "driver_specific": { 00:19:46.643 "lvol": { 00:19:46.643 "lvol_store_uuid": "35b440cc-0ce5-4f77-a90b-913f558ace48", 00:19:46.643 "base_bdev": "nvme0n1", 00:19:46.643 "thin_provision": true, 00:19:46.643 "num_allocated_clusters": 0, 00:19:46.643 "snapshot": false, 00:19:46.643 "clone": false, 00:19:46.643 "esnap_clone": false 00:19:46.643 } 00:19:46.643 } 00:19:46.643 } 00:19:46.643 ]' 00:19:46.643 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:46.643 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:46.643 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:46.643 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:46.643 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:46.643 11:23:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:19:46.643 11:23:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:46.643 11:23:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ee6d28e9-fcaa-423b-bff5-82820a512f8b -c nvc0n1p0 --l2p_dram_limit 20 00:19:46.905 [2024-11-15 11:23:24.184943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.905 [2024-11-15 11:23:24.185010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:46.905 [2024-11-15 11:23:24.185028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:46.905 [2024-11-15 11:23:24.185041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.905 [2024-11-15 11:23:24.185115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.905 [2024-11-15 11:23:24.185133] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.905 [2024-11-15 11:23:24.185144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:46.905 [2024-11-15 11:23:24.185157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.905 [2024-11-15 11:23:24.185178] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:46.905 [2024-11-15 11:23:24.186218] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:46.905 [2024-11-15 11:23:24.186248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.905 [2024-11-15 11:23:24.186262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.905 [2024-11-15 11:23:24.186274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:19:46.905 [2024-11-15 11:23:24.186286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.905 [2024-11-15 11:23:24.186329] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f8e7fd98-0612-4efe-b907-3c7ad7764823 00:19:46.905 [2024-11-15 11:23:24.187742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.905 [2024-11-15 11:23:24.187885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:46.905 [2024-11-15 11:23:24.187913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:46.905 [2024-11-15 11:23:24.187928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.905 [2024-11-15 11:23:24.195374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.905 [2024-11-15 11:23:24.195506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.905 [2024-11-15 11:23:24.195530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.371 ms 00:19:46.905 [2024-11-15 11:23:24.195541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.905 [2024-11-15 11:23:24.195665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.905 [2024-11-15 11:23:24.195680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.905 [2024-11-15 11:23:24.195698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:46.905 [2024-11-15 11:23:24.195708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.905 [2024-11-15 11:23:24.195794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.905 [2024-11-15 11:23:24.195807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:46.905 [2024-11-15 11:23:24.195821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:46.905 [2024-11-15 11:23:24.195832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.905 [2024-11-15 11:23:24.195857] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:46.905 [2024-11-15 11:23:24.200750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.905 [2024-11-15 11:23:24.200784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.905 [2024-11-15 11:23:24.200796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.911 ms 00:19:46.906 [2024-11-15 11:23:24.200814] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.906 [2024-11-15 11:23:24.200846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.906 [2024-11-15 11:23:24.200859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:46.906 [2024-11-15 11:23:24.200870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:46.906 [2024-11-15 11:23:24.200883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.906 [2024-11-15 11:23:24.200915] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:46.906 [2024-11-15 11:23:24.201051] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:46.906 [2024-11-15 11:23:24.201065] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:46.906 [2024-11-15 11:23:24.201082] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:46.906 [2024-11-15 11:23:24.201095] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:46.906 [2024-11-15 11:23:24.201110] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:46.906 [2024-11-15 11:23:24.201121] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:46.906 [2024-11-15 11:23:24.201134] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:46.906 [2024-11-15 11:23:24.201144] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:46.906 [2024-11-15 11:23:24.201156] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:46.906 [2024-11-15 11:23:24.201167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.906 [2024-11-15 11:23:24.201183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:46.906 [2024-11-15 11:23:24.201194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:19:46.906 [2024-11-15 11:23:24.201206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.906 [2024-11-15 11:23:24.201276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.906 [2024-11-15 11:23:24.201291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:46.906 [2024-11-15 11:23:24.201302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:46.906 [2024-11-15 11:23:24.201316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.906 [2024-11-15 11:23:24.201395] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:46.906 [2024-11-15 11:23:24.201409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:46.906 [2024-11-15 11:23:24.201422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.906 [2024-11-15 11:23:24.201435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.906 [2024-11-15 11:23:24.201446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:46.906 [2024-11-15 11:23:24.201457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:46.906 [2024-11-15 11:23:24.201467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:46.906 
[2024-11-15 11:23:24.201479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:46.906 [2024-11-15 11:23:24.201488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:46.906 [2024-11-15 11:23:24.201500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.906 [2024-11-15 11:23:24.201509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:46.906 [2024-11-15 11:23:24.201521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:46.906 [2024-11-15 11:23:24.201534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.906 [2024-11-15 11:23:24.201568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:46.906 [2024-11-15 11:23:24.201578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:46.906 [2024-11-15 11:23:24.201593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.906 [2024-11-15 11:23:24.201603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:46.906 [2024-11-15 11:23:24.201614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:46.906 [2024-11-15 11:23:24.201623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.906 [2024-11-15 11:23:24.201637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:46.906 [2024-11-15 11:23:24.201647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:46.906 [2024-11-15 11:23:24.201658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.906 [2024-11-15 11:23:24.201667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:46.906 [2024-11-15 11:23:24.201679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:46.906 [2024-11-15 11:23:24.201688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.906 [2024-11-15 11:23:24.201700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:46.906 [2024-11-15 11:23:24.201709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:46.906 [2024-11-15 11:23:24.201720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.906 [2024-11-15 11:23:24.201730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:46.906 [2024-11-15 11:23:24.201741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:46.906 [2024-11-15 11:23:24.201750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.906 [2024-11-15 11:23:24.201764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:46.906 [2024-11-15 11:23:24.201773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:46.906 [2024-11-15 11:23:24.201784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.906 [2024-11-15 11:23:24.201793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:46.906 [2024-11-15 11:23:24.201805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:46.906 [2024-11-15 11:23:24.201814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.906 [2024-11-15 11:23:24.201825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:46.906 [2024-11-15 11:23:24.201834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:46.906 [2024-11-15 11:23:24.201845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.906 [2024-11-15 11:23:24.201854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:46.906 [2024-11-15 11:23:24.201866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:46.906 [2024-11-15 11:23:24.201875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.906 [2024-11-15 11:23:24.201886] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:46.906 [2024-11-15 11:23:24.201898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:46.906 [2024-11-15 11:23:24.201911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.906 [2024-11-15 11:23:24.201920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.906 [2024-11-15 11:23:24.201937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:46.906 [2024-11-15 11:23:24.201947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:46.906 [2024-11-15 11:23:24.201958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:46.906 [2024-11-15 11:23:24.201968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:46.906 [2024-11-15 11:23:24.201980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:46.906 [2024-11-15 11:23:24.201989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:46.906 [2024-11-15 11:23:24.202005] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:46.906 [2024-11-15 11:23:24.202017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.906 [2024-11-15 11:23:24.202030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:46.906 [2024-11-15 11:23:24.202041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:46.906 [2024-11-15 11:23:24.202054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:46.906 [2024-11-15 11:23:24.202064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:46.906 [2024-11-15 11:23:24.202076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:46.906 [2024-11-15 11:23:24.202087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:46.906 [2024-11-15 11:23:24.202099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:46.906 [2024-11-15 11:23:24.202109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:46.906 [2024-11-15 11:23:24.202125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:46.906 [2024-11-15 11:23:24.202135] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:46.907 [2024-11-15 11:23:24.202155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:46.907 [2024-11-15 11:23:24.202166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:46.907 [2024-11-15 11:23:24.202179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:46.907 [2024-11-15 11:23:24.202190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:46.907 [2024-11-15 11:23:24.202202] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:46.907 [2024-11-15 11:23:24.202214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.907 [2024-11-15 11:23:24.202229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:46.907 [2024-11-15 11:23:24.202240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:46.907 [2024-11-15 11:23:24.202253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:46.907 [2024-11-15 11:23:24.202263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:46.907 [2024-11-15 11:23:24.202276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.907 [2024-11-15 11:23:24.202291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:46.907 [2024-11-15 11:23:24.202305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:19:46.907 [2024-11-15 11:23:24.202314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.907 [2024-11-15 11:23:24.202355] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
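Note: the superblock dump above records each metadata region as hexadecimal block offsets and sizes, and the MiB figures in the per-region dump can be recovered from them, assuming the FTL's usual 4096-byte block size. A minimal cross-check, not part of the test suite:

  # to_mib: hypothetical helper converting a hex block count to MiB,
  # assuming a 4096-byte FTL block (an assumption, not read from the log).
  to_mib() { awk -v blocks="$(( $1 ))" 'BEGIN { printf "%.2f MiB\n", blocks * 4096 / 1048576 }'; }
  to_mib 0x5020   # -> 80.12 MiB, the band_md offset printed above (blk_offs of the type:0x3 region)
  to_mib 0x80     # -> 0.50 MiB, the band_md size printed above (blk_sz of the same region)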
00:19:46.907 [2024-11-15 11:23:24.202367] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:50.214 [2024-11-15 11:23:27.595866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.214 [2024-11-15 11:23:27.595943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:50.214 [2024-11-15 11:23:27.595975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3399.010 ms 00:19:50.214 [2024-11-15 11:23:27.595990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.473 [2024-11-15 11:23:27.635116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.473 [2024-11-15 11:23:27.635189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.473 [2024-11-15 11:23:27.635214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.901 ms 00:19:50.473 [2024-11-15 11:23:27.635230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.473 [2024-11-15 11:23:27.635412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.473 [2024-11-15 11:23:27.635430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:50.473 [2024-11-15 11:23:27.635452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:50.473 [2024-11-15 11:23:27.635468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.473 [2024-11-15 11:23:27.692167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.473 [2024-11-15 11:23:27.692226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.473 [2024-11-15 11:23:27.692247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.708 ms 00:19:50.473 [2024-11-15 11:23:27.692258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.473 [2024-11-15 11:23:27.692309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.473 [2024-11-15 11:23:27.692324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.473 [2024-11-15 11:23:27.692338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:50.473 [2024-11-15 11:23:27.692348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.473 [2024-11-15 11:23:27.692870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.473 [2024-11-15 11:23:27.692885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.473 [2024-11-15 11:23:27.692899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:19:50.473 [2024-11-15 11:23:27.692909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.473 [2024-11-15 11:23:27.693025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.474 [2024-11-15 11:23:27.693038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.474 [2024-11-15 11:23:27.693054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:50.474 [2024-11-15 11:23:27.693065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.474 [2024-11-15 11:23:27.712949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.474 [2024-11-15 11:23:27.712998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.474 [2024-11-15 
11:23:27.713018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.894 ms 00:19:50.474 [2024-11-15 11:23:27.713029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.474 [2024-11-15 11:23:27.726159] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:50.474 [2024-11-15 11:23:27.732263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.474 [2024-11-15 11:23:27.732302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:50.474 [2024-11-15 11:23:27.732318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.150 ms 00:19:50.474 [2024-11-15 11:23:27.732331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.474 [2024-11-15 11:23:27.831370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.474 [2024-11-15 11:23:27.831605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:50.474 [2024-11-15 11:23:27.831631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.154 ms 00:19:50.474 [2024-11-15 11:23:27.831646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.474 [2024-11-15 11:23:27.831840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.474 [2024-11-15 11:23:27.831861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:50.474 [2024-11-15 11:23:27.831873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:19:50.474 [2024-11-15 11:23:27.831889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.474 [2024-11-15 11:23:27.870166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.474 [2024-11-15 11:23:27.870229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:50.474 [2024-11-15 11:23:27.870246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.277 ms 00:19:50.474 [2024-11-15 11:23:27.870260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.732 [2024-11-15 11:23:27.907823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.732 [2024-11-15 11:23:27.907889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:50.732 [2024-11-15 11:23:27.907908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.573 ms 00:19:50.732 [2024-11-15 11:23:27.907921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.732 [2024-11-15 11:23:27.908633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.732 [2024-11-15 11:23:27.908658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:50.732 [2024-11-15 11:23:27.908671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:19:50.732 [2024-11-15 11:23:27.908683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.732 [2024-11-15 11:23:28.015346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.732 [2024-11-15 11:23:28.015426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:50.732 [2024-11-15 11:23:28.015444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.769 ms 00:19:50.732 [2024-11-15 11:23:28.015458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.732 [2024-11-15 
11:23:28.053545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.732 [2024-11-15 11:23:28.053782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:50.732 [2024-11-15 11:23:28.053812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.053 ms 00:19:50.732 [2024-11-15 11:23:28.053826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.732 [2024-11-15 11:23:28.091524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.732 [2024-11-15 11:23:28.091592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:50.732 [2024-11-15 11:23:28.091610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.691 ms 00:19:50.732 [2024-11-15 11:23:28.091622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.732 [2024-11-15 11:23:28.129569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.732 [2024-11-15 11:23:28.129635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:50.732 [2024-11-15 11:23:28.129653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.961 ms 00:19:50.732 [2024-11-15 11:23:28.129666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.732 [2024-11-15 11:23:28.129719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.732 [2024-11-15 11:23:28.129737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:50.732 [2024-11-15 11:23:28.129749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:50.732 [2024-11-15 11:23:28.129762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.732 [2024-11-15 11:23:28.129874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.732 [2024-11-15 11:23:28.129890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:50.732 [2024-11-15 11:23:28.129901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:50.732 [2024-11-15 11:23:28.129914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.732 [2024-11-15 11:23:28.131062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3952.106 ms, result 0 00:19:50.990 { 00:19:50.990 "name": "ftl0", 00:19:50.990 "uuid": "f8e7fd98-0612-4efe-b907-3c7ad7764823" 00:19:50.990 } 00:19:50.990 11:23:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:50.990 11:23:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:50.990 11:23:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:50.990 11:23:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:51.249 [2024-11-15 11:23:28.431051] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:51.249 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:51.249 Zero copy mechanism will not be used. 00:19:51.249 Running I/O for 4 seconds... 
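Note: the run above is driven entirely over RPC: bdevperf is started in wait-for-tests mode and the helper script then issues the workload. The 69632-byte I/O size is 68 KiB, which is why the notice reports it exceeds the 65536-byte (64 KiB) zero-copy threshold. A rough by-hand equivalent is sketched below; the JSON config path is a placeholder for one that defines the ftl0 bdev, and the exact flags the suite passes to the bdevperf binary are not shown in this log.

  # Sketch only: start bdevperf waiting for RPC (-z), then drive the same
  # workload the log shows via the perform_tests helper.
  build/examples/bdevperf --json /path/to/ftl.json -z &
  examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632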
00:19:53.125 1626.00 IOPS, 107.98 MiB/s [2024-11-15T11:23:31.461Z] 1644.00 IOPS, 109.17 MiB/s [2024-11-15T11:23:32.837Z] 1628.67 IOPS, 108.15 MiB/s [2024-11-15T11:23:32.837Z] 1616.75 IOPS, 107.36 MiB/s 00:19:55.436 Latency(us) 00:19:55.436 [2024-11-15T11:23:32.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.436 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:55.436 ftl0 : 4.00 1616.38 107.34 0.00 0.00 648.32 220.43 2526.69 00:19:55.436 [2024-11-15T11:23:32.837Z] =================================================================================================================== 00:19:55.436 [2024-11-15T11:23:32.837Z] Total : 1616.38 107.34 0.00 0.00 648.32 220.43 2526.69 00:19:55.436 [2024-11-15 11:23:32.435364] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:55.436 { 00:19:55.436 "results": [ 00:19:55.436 { 00:19:55.436 "job": "ftl0", 00:19:55.436 "core_mask": "0x1", 00:19:55.436 "workload": "randwrite", 00:19:55.436 "status": "finished", 00:19:55.436 "queue_depth": 1, 00:19:55.436 "io_size": 69632, 00:19:55.436 "runtime": 4.001533, 00:19:55.436 "iops": 1616.3805221648804, 00:19:55.436 "mibps": 107.33776905001159, 00:19:55.436 "io_failed": 0, 00:19:55.436 "io_timeout": 0, 00:19:55.436 "avg_latency_us": 648.3212255329295, 00:19:55.436 "min_latency_us": 220.4273092369478, 00:19:55.436 "max_latency_us": 2526.6891566265062 00:19:55.436 } 00:19:55.436 ], 00:19:55.436 "core_count": 1 00:19:55.436 } 00:19:55.436 11:23:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:55.436 [2024-11-15 11:23:32.556170] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:55.436 Running I/O for 4 seconds... 
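Note: the summary row above is internally consistent: MiB/s is simply IOPS times the 69632-byte I/O size. A quick arithmetic check of the figures from the JSON blob:

  # Pure arithmetic, no SPDK needed: IOPS * io_size / 2^20.
  awk 'BEGIN { printf "%.2f MiB/s\n", 1616.3805221648804 * 69632 / 1048576 }'
  # -> 107.34 MiB/s, matching the reported "mibps" of 107.33776905001159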
00:19:57.335 10922.00 IOPS, 42.66 MiB/s [2024-11-15T11:23:35.670Z] 10908.00 IOPS, 42.61 MiB/s [2024-11-15T11:23:36.608Z] 10977.67 IOPS, 42.88 MiB/s [2024-11-15T11:23:36.608Z] 10952.50 IOPS, 42.78 MiB/s 00:19:59.207 Latency(us) 00:19:59.207 [2024-11-15T11:23:36.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.207 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:59.207 ftl0 : 4.02 10926.86 42.68 0.00 0.00 11682.67 245.10 36426.44 00:19:59.207 [2024-11-15T11:23:36.608Z] =================================================================================================================== 00:19:59.207 [2024-11-15T11:23:36.608Z] Total : 10926.86 42.68 0.00 0.00 11682.67 0.00 36426.44 00:19:59.207 { 00:19:59.207 "results": [ 00:19:59.207 { 00:19:59.207 "job": "ftl0", 00:19:59.207 "core_mask": "0x1", 00:19:59.207 "workload": "randwrite", 00:19:59.207 "status": "finished", 00:19:59.207 "queue_depth": 128, 00:19:59.207 "io_size": 4096, 00:19:59.207 "runtime": 4.021101, 00:19:59.207 "iops": 10926.858091850963, 00:19:59.207 "mibps": 42.683039421292825, 00:19:59.207 "io_failed": 0, 00:19:59.207 "io_timeout": 0, 00:19:59.207 "avg_latency_us": 11682.67030201922, 00:19:59.207 "min_latency_us": 245.1020080321285, 00:19:59.207 "max_latency_us": 36426.43534136546 00:19:59.207 } 00:19:59.207 ], 00:19:59.207 "core_count": 1 00:19:59.207 } 00:19:59.207 [2024-11-15 11:23:36.581276] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:59.207 11:23:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:59.466 [2024-11-15 11:23:36.705949] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:59.466 Running I/O for 4 seconds... 
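Note: each perform_tests call prints its results as a JSON blob like the one above, and since the suite already shells out to jq (see the `jq -r .name` step earlier), the headline numbers can be pulled the same way. A sketch against a hypothetical capture of that blob in result.json:

  # Extract job name, IOPS, and average latency from a saved results blob;
  # "result.json" is a stand-in, not a file the suite actually writes.
  jq -r '.results[0] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' result.json
  # -> ftl0: 10926.858091850963 IOPS, avg 11682.67030201922 us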
00:20:01.383 7842.00 IOPS, 30.63 MiB/s [2024-11-15T11:23:39.720Z] 7841.00 IOPS, 30.63 MiB/s [2024-11-15T11:23:41.097Z] 7884.33 IOPS, 30.80 MiB/s [2024-11-15T11:23:41.097Z] 7862.50 IOPS, 30.71 MiB/s 00:20:03.696 Latency(us) 00:20:03.696 [2024-11-15T11:23:41.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.696 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:03.696 Verification LBA range: start 0x0 length 0x1400000 00:20:03.696 ftl0 : 4.01 7875.16 30.76 0.00 0.00 16205.50 273.07 33268.07 00:20:03.696 [2024-11-15T11:23:41.097Z] =================================================================================================================== 00:20:03.696 [2024-11-15T11:23:41.097Z] Total : 7875.16 30.76 0.00 0.00 16205.50 0.00 33268.07 00:20:03.696 [2024-11-15 11:23:40.728235] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:03.696 { 00:20:03.696 "results": [ 00:20:03.696 { 00:20:03.696 "job": "ftl0", 00:20:03.696 "core_mask": "0x1", 00:20:03.696 "workload": "verify", 00:20:03.696 "status": "finished", 00:20:03.696 "verify_range": { 00:20:03.696 "start": 0, 00:20:03.696 "length": 20971520 00:20:03.696 }, 00:20:03.696 "queue_depth": 128, 00:20:03.696 "io_size": 4096, 00:20:03.696 "runtime": 4.00957, 00:20:03.696 "iops": 7875.158682851278, 00:20:03.696 "mibps": 30.762338604887805, 00:20:03.696 "io_failed": 0, 00:20:03.696 "io_timeout": 0, 00:20:03.696 "avg_latency_us": 16205.501575392018, 00:20:03.696 "min_latency_us": 273.06666666666666, 00:20:03.696 "max_latency_us": 33268.07389558233 00:20:03.696 } 00:20:03.696 ], 00:20:03.696 "core_count": 1 00:20:03.696 } 00:20:03.696 11:23:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:20:03.696 [2024-11-15 11:23:40.939818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.696 [2024-11-15 11:23:40.940055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:03.696 [2024-11-15 11:23:40.940082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:03.696 [2024-11-15 11:23:40.940097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.696 [2024-11-15 11:23:40.940138] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:03.696 [2024-11-15 11:23:40.944275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.696 [2024-11-15 11:23:40.944306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:03.696 [2024-11-15 11:23:40.944321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.121 ms 00:20:03.696 [2024-11-15 11:23:40.944331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.696 [2024-11-15 11:23:40.946352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.696 [2024-11-15 11:23:40.946389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:03.696 [2024-11-15 11:23:40.946412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.993 ms 00:20:03.696 [2024-11-15 11:23:40.946423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.956 [2024-11-15 11:23:41.157698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.956 [2024-11-15 11:23:41.157771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:20:03.956 [2024-11-15 11:23:41.157810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 211.582 ms 00:20:03.956 [2024-11-15 11:23:41.157824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.956 [2024-11-15 11:23:41.163055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.956 [2024-11-15 11:23:41.163100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:03.956 [2024-11-15 11:23:41.163119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.190 ms 00:20:03.956 [2024-11-15 11:23:41.163136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.956 [2024-11-15 11:23:41.199293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.956 [2024-11-15 11:23:41.199345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:03.956 [2024-11-15 11:23:41.199364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.124 ms 00:20:03.956 [2024-11-15 11:23:41.199375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.956 [2024-11-15 11:23:41.220553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.956 [2024-11-15 11:23:41.220616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:03.956 [2024-11-15 11:23:41.220636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.159 ms 00:20:03.956 [2024-11-15 11:23:41.220647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.956 [2024-11-15 11:23:41.220792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.956 [2024-11-15 11:23:41.220806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:03.956 [2024-11-15 11:23:41.220823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:20:03.956 [2024-11-15 11:23:41.220834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.956 [2024-11-15 11:23:41.258027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.956 [2024-11-15 11:23:41.258066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:03.956 [2024-11-15 11:23:41.258083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.231 ms 00:20:03.956 [2024-11-15 11:23:41.258093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.956 [2024-11-15 11:23:41.291832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.956 [2024-11-15 11:23:41.291982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:03.956 [2024-11-15 11:23:41.292010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.748 ms 00:20:03.956 [2024-11-15 11:23:41.292021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.956 [2024-11-15 11:23:41.329039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.956 [2024-11-15 11:23:41.329286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:03.956 [2024-11-15 11:23:41.329318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.011 ms 00:20:03.956 [2024-11-15 11:23:41.329329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.216 [2024-11-15 11:23:41.367598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.216 [2024-11-15 11:23:41.367830] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:04.216 [2024-11-15 11:23:41.367864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.129 ms 00:20:04.216 [2024-11-15 11:23:41.367875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.216 [2024-11-15 11:23:41.367987] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:04.216 [2024-11-15 11:23:41.368006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:20:04.216 [2024-11-15 11:23:41.368288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.368854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369465] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:04.216 [2024-11-15 11:23:41.369522] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:04.216 [2024-11-15 11:23:41.369534] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8e7fd98-0612-4efe-b907-3c7ad7764823 00:20:04.216 [2024-11-15 11:23:41.369549] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:04.216 [2024-11-15 11:23:41.369570] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:04.216 [2024-11-15 11:23:41.369581] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:04.216 [2024-11-15 11:23:41.369593] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:04.217 [2024-11-15 11:23:41.369603] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:04.217 [2024-11-15 11:23:41.369616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:04.217 [2024-11-15 11:23:41.369626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:04.217 [2024-11-15 11:23:41.369641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:04.217 [2024-11-15 11:23:41.369650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:04.217 [2024-11-15 11:23:41.369663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.217 [2024-11-15 11:23:41.369673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:04.217 [2024-11-15 11:23:41.369687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.681 ms 00:20:04.217 [2024-11-15 11:23:41.369697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.217 [2024-11-15 11:23:41.389587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.217 [2024-11-15 11:23:41.389638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:04.217 [2024-11-15 11:23:41.389658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.845 ms 00:20:04.217 [2024-11-15 11:23:41.389668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.217 [2024-11-15 11:23:41.390233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.217 [2024-11-15 11:23:41.390244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:04.217 [2024-11-15 11:23:41.390258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:20:04.217 [2024-11-15 11:23:41.390268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.217 [2024-11-15 11:23:41.444859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.217 [2024-11-15 11:23:41.444931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:04.217 [2024-11-15 11:23:41.444960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.217 [2024-11-15 11:23:41.444977] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:04.217 [2024-11-15 11:23:41.445067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.217 [2024-11-15 11:23:41.445085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:04.217 [2024-11-15 11:23:41.445102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.217 [2024-11-15 11:23:41.445113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.217 [2024-11-15 11:23:41.445244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.217 [2024-11-15 11:23:41.445262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:04.217 [2024-11-15 11:23:41.445276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.217 [2024-11-15 11:23:41.445286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.217 [2024-11-15 11:23:41.445310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.217 [2024-11-15 11:23:41.445321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:04.217 [2024-11-15 11:23:41.445342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.217 [2024-11-15 11:23:41.445352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.217 [2024-11-15 11:23:41.565539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.217 [2024-11-15 11:23:41.565612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:04.217 [2024-11-15 11:23:41.565637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.217 [2024-11-15 11:23:41.565651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.475 [2024-11-15 11:23:41.662219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.475 [2024-11-15 11:23:41.662268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:04.475 [2024-11-15 11:23:41.662286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.475 [2024-11-15 11:23:41.662297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.475 [2024-11-15 11:23:41.662424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.475 [2024-11-15 11:23:41.662437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:04.475 [2024-11-15 11:23:41.662450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.475 [2024-11-15 11:23:41.662461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.475 [2024-11-15 11:23:41.662509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.475 [2024-11-15 11:23:41.662521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:04.475 [2024-11-15 11:23:41.662535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.475 [2024-11-15 11:23:41.662545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.475 [2024-11-15 11:23:41.662694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.475 [2024-11-15 11:23:41.662712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:04.475 [2024-11-15 11:23:41.662728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:04.475 [2024-11-15 11:23:41.662739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.475 [2024-11-15 11:23:41.662780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.475 [2024-11-15 11:23:41.662793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:04.475 [2024-11-15 11:23:41.662806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.475 [2024-11-15 11:23:41.662816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.475 [2024-11-15 11:23:41.662859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.475 [2024-11-15 11:23:41.662873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:04.475 [2024-11-15 11:23:41.662899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.475 [2024-11-15 11:23:41.662915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.475 [2024-11-15 11:23:41.662965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.475 [2024-11-15 11:23:41.662986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:04.475 [2024-11-15 11:23:41.663004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.475 [2024-11-15 11:23:41.663021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.475 [2024-11-15 11:23:41.663170] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 724.474 ms, result 0 00:20:04.475 true 00:20:04.475 11:23:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75089 00:20:04.475 11:23:41 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 75089 ']' 00:20:04.475 11:23:41 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 75089 00:20:04.475 11:23:41 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:20:04.475 11:23:41 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:04.475 11:23:41 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75089 00:20:04.475 killing process with pid 75089 00:20:04.475 Received shutdown signal, test time was about 4.000000 seconds 00:20:04.475 00:20:04.475 Latency(us) 00:20:04.475 [2024-11-15T11:23:41.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.475 [2024-11-15T11:23:41.876Z] =================================================================================================================== 00:20:04.475 [2024-11-15T11:23:41.876Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.475 11:23:41 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:04.475 11:23:41 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:04.475 11:23:41 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75089' 00:20:04.475 11:23:41 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 75089 00:20:04.475 11:23:41 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 75089 00:20:05.852 Remove shared memory files 00:20:05.852 11:23:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:05.852 11:23:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:20:05.852 11:23:43 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:05.852 11:23:43 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:20:05.852 11:23:43 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:20:05.852 11:23:43 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:20:05.852 11:23:43 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:05.852 11:23:43 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:20:05.852 ************************************ 00:20:05.852 END TEST ftl_bdevperf 00:20:05.852 ************************************ 00:20:05.852 00:20:05.852 real 0m23.102s 00:20:05.852 user 0m25.705s 00:20:05.852 sys 0m1.229s 00:20:05.852 11:23:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:05.852 11:23:43 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:05.852 11:23:43 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:05.852 11:23:43 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:20:05.852 11:23:43 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:05.852 11:23:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:05.852 ************************************ 00:20:05.852 START TEST ftl_trim 00:20:05.852 ************************************ 00:20:05.852 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:06.112 * Looking for test storage... 00:20:06.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:06.112 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:06.112 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:20:06.112 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:06.112 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.112 11:23:43 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:20:06.112 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:06.112 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:06.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.112 --rc genhtml_branch_coverage=1 00:20:06.112 --rc genhtml_function_coverage=1 00:20:06.112 --rc genhtml_legend=1 00:20:06.112 --rc geninfo_all_blocks=1 00:20:06.112 --rc geninfo_unexecuted_blocks=1 00:20:06.112 00:20:06.112 ' 00:20:06.112 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:06.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.112 --rc genhtml_branch_coverage=1 00:20:06.112 --rc genhtml_function_coverage=1 00:20:06.112 --rc genhtml_legend=1 00:20:06.112 --rc geninfo_all_blocks=1 00:20:06.112 --rc geninfo_unexecuted_blocks=1 00:20:06.112 00:20:06.112 ' 00:20:06.112 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:06.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.112 --rc genhtml_branch_coverage=1 00:20:06.112 --rc genhtml_function_coverage=1 00:20:06.112 --rc genhtml_legend=1 00:20:06.112 --rc geninfo_all_blocks=1 00:20:06.112 --rc geninfo_unexecuted_blocks=1 00:20:06.112 00:20:06.112 ' 00:20:06.112 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:06.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.112 --rc genhtml_branch_coverage=1 00:20:06.112 --rc genhtml_function_coverage=1 00:20:06.112 --rc genhtml_legend=1 00:20:06.112 --rc geninfo_all_blocks=1 00:20:06.112 --rc geninfo_unexecuted_blocks=1 00:20:06.112 00:20:06.112 ' 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:06.112 11:23:43 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:06.113 11:23:43 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75441 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75441 00:20:06.113 11:23:43 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:06.113 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75441 ']' 00:20:06.113 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.113 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:06.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.113 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.113 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:06.113 11:23:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:06.371 [2024-11-15 11:23:43.522627] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:20:06.371 [2024-11-15 11:23:43.522755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75441 ] 00:20:06.371 [2024-11-15 11:23:43.703482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:06.629 [2024-11-15 11:23:43.831256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.629 [2024-11-15 11:23:43.831401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.629 [2024-11-15 11:23:43.831435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.565 11:23:44 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:07.565 11:23:44 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:20:07.565 11:23:44 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:07.565 11:23:44 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:20:07.565 11:23:44 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:07.565 11:23:44 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:20:07.565 11:23:44 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:20:07.565 11:23:44 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:07.824 11:23:45 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:07.824 11:23:45 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:20:07.824 11:23:45 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:07.824 11:23:45 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:20:07.824 11:23:45 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:07.824 11:23:45 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:20:07.824 11:23:45 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:20:07.824 11:23:45 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:08.082 11:23:45 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:08.082 { 00:20:08.082 "name": "nvme0n1", 00:20:08.082 "aliases": [ 
00:20:08.082 "e3ccfd74-ea67-4456-9c2e-5db9b91b5626" 00:20:08.082 ], 00:20:08.082 "product_name": "NVMe disk", 00:20:08.082 "block_size": 4096, 00:20:08.082 "num_blocks": 1310720, 00:20:08.082 "uuid": "e3ccfd74-ea67-4456-9c2e-5db9b91b5626", 00:20:08.082 "numa_id": -1, 00:20:08.082 "assigned_rate_limits": { 00:20:08.082 "rw_ios_per_sec": 0, 00:20:08.082 "rw_mbytes_per_sec": 0, 00:20:08.082 "r_mbytes_per_sec": 0, 00:20:08.082 "w_mbytes_per_sec": 0 00:20:08.082 }, 00:20:08.082 "claimed": true, 00:20:08.082 "claim_type": "read_many_write_one", 00:20:08.082 "zoned": false, 00:20:08.082 "supported_io_types": { 00:20:08.082 "read": true, 00:20:08.082 "write": true, 00:20:08.082 "unmap": true, 00:20:08.082 "flush": true, 00:20:08.082 "reset": true, 00:20:08.082 "nvme_admin": true, 00:20:08.082 "nvme_io": true, 00:20:08.082 "nvme_io_md": false, 00:20:08.082 "write_zeroes": true, 00:20:08.082 "zcopy": false, 00:20:08.082 "get_zone_info": false, 00:20:08.082 "zone_management": false, 00:20:08.082 "zone_append": false, 00:20:08.082 "compare": true, 00:20:08.082 "compare_and_write": false, 00:20:08.082 "abort": true, 00:20:08.082 "seek_hole": false, 00:20:08.082 "seek_data": false, 00:20:08.082 "copy": true, 00:20:08.082 "nvme_iov_md": false 00:20:08.082 }, 00:20:08.082 "driver_specific": { 00:20:08.082 "nvme": [ 00:20:08.082 { 00:20:08.082 "pci_address": "0000:00:11.0", 00:20:08.082 "trid": { 00:20:08.082 "trtype": "PCIe", 00:20:08.082 "traddr": "0000:00:11.0" 00:20:08.082 }, 00:20:08.082 "ctrlr_data": { 00:20:08.082 "cntlid": 0, 00:20:08.082 "vendor_id": "0x1b36", 00:20:08.082 "model_number": "QEMU NVMe Ctrl", 00:20:08.082 "serial_number": "12341", 00:20:08.082 "firmware_revision": "8.0.0", 00:20:08.082 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:08.082 "oacs": { 00:20:08.082 "security": 0, 00:20:08.082 "format": 1, 00:20:08.082 "firmware": 0, 00:20:08.082 "ns_manage": 1 00:20:08.082 }, 00:20:08.082 "multi_ctrlr": false, 00:20:08.082 "ana_reporting": false 00:20:08.082 }, 00:20:08.082 "vs": { 00:20:08.082 "nvme_version": "1.4" 00:20:08.082 }, 00:20:08.082 "ns_data": { 00:20:08.082 "id": 1, 00:20:08.082 "can_share": false 00:20:08.082 } 00:20:08.082 } 00:20:08.082 ], 00:20:08.082 "mp_policy": "active_passive" 00:20:08.082 } 00:20:08.082 } 00:20:08.082 ]' 00:20:08.082 11:23:45 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:08.082 11:23:45 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:20:08.082 11:23:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:08.082 11:23:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:20:08.082 11:23:45 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:20:08.082 11:23:45 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:20:08.082 11:23:45 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:20:08.082 11:23:45 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:08.082 11:23:45 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:20:08.082 11:23:45 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:08.082 11:23:45 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:08.340 11:23:45 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=35b440cc-0ce5-4f77-a90b-913f558ace48 00:20:08.340 11:23:45 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:20:08.340 11:23:45 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 35b440cc-0ce5-4f77-a90b-913f558ace48 00:20:08.599 11:23:45 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:08.599 11:23:45 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=298bb846-5292-4a6f-a3da-913e44be28a5 00:20:08.599 11:23:45 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 298bb846-5292-4a6f-a3da-913e44be28a5 00:20:08.858 11:23:46 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=8a72390b-53ed-4b5c-a6c9-5602ae49ca3e 00:20:08.858 11:23:46 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8a72390b-53ed-4b5c-a6c9-5602ae49ca3e 00:20:08.858 11:23:46 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:20:08.858 11:23:46 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:08.858 11:23:46 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=8a72390b-53ed-4b5c-a6c9-5602ae49ca3e 00:20:08.858 11:23:46 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:20:08.858 11:23:46 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 8a72390b-53ed-4b5c-a6c9-5602ae49ca3e 00:20:08.858 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=8a72390b-53ed-4b5c-a6c9-5602ae49ca3e 00:20:08.858 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:08.858 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:20:08.858 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:20:08.858 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a72390b-53ed-4b5c-a6c9-5602ae49ca3e 00:20:09.117 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:09.117 { 00:20:09.117 "name": "8a72390b-53ed-4b5c-a6c9-5602ae49ca3e", 00:20:09.117 "aliases": [ 00:20:09.117 "lvs/nvme0n1p0" 00:20:09.117 ], 00:20:09.117 "product_name": "Logical Volume", 00:20:09.117 "block_size": 4096, 00:20:09.117 "num_blocks": 26476544, 00:20:09.117 "uuid": "8a72390b-53ed-4b5c-a6c9-5602ae49ca3e", 00:20:09.117 "assigned_rate_limits": { 00:20:09.117 "rw_ios_per_sec": 0, 00:20:09.117 "rw_mbytes_per_sec": 0, 00:20:09.117 "r_mbytes_per_sec": 0, 00:20:09.117 "w_mbytes_per_sec": 0 00:20:09.117 }, 00:20:09.117 "claimed": false, 00:20:09.117 "zoned": false, 00:20:09.117 "supported_io_types": { 00:20:09.117 "read": true, 00:20:09.117 "write": true, 00:20:09.117 "unmap": true, 00:20:09.117 "flush": false, 00:20:09.117 "reset": true, 00:20:09.117 "nvme_admin": false, 00:20:09.117 "nvme_io": false, 00:20:09.117 "nvme_io_md": false, 00:20:09.117 "write_zeroes": true, 00:20:09.117 "zcopy": false, 00:20:09.117 "get_zone_info": false, 00:20:09.117 "zone_management": false, 00:20:09.117 "zone_append": false, 00:20:09.117 "compare": false, 00:20:09.117 "compare_and_write": false, 00:20:09.117 "abort": false, 00:20:09.117 "seek_hole": true, 00:20:09.117 "seek_data": true, 00:20:09.117 "copy": false, 00:20:09.117 "nvme_iov_md": false 00:20:09.117 }, 00:20:09.117 "driver_specific": { 00:20:09.117 "lvol": { 00:20:09.117 "lvol_store_uuid": "298bb846-5292-4a6f-a3da-913e44be28a5", 00:20:09.117 "base_bdev": "nvme0n1", 00:20:09.117 "thin_provision": true, 00:20:09.117 "num_allocated_clusters": 0, 00:20:09.117 "snapshot": false, 00:20:09.117 "clone": false, 00:20:09.117 "esnap_clone": false 00:20:09.117 } 00:20:09.117 } 00:20:09.117 } 00:20:09.117 ]' 00:20:09.117 11:23:46 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:09.117 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:20:09.117 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:09.118 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:20:09.118 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:09.118 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:20:09.118 11:23:46 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:20:09.118 11:23:46 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:20:09.118 11:23:46 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:09.377 11:23:46 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:09.377 11:23:46 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:09.377 11:23:46 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 8a72390b-53ed-4b5c-a6c9-5602ae49ca3e 00:20:09.377 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=8a72390b-53ed-4b5c-a6c9-5602ae49ca3e 00:20:09.377 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:09.377 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:20:09.377 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:20:09.377 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a72390b-53ed-4b5c-a6c9-5602ae49ca3e 00:20:09.634 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:09.634 { 00:20:09.634 "name": "8a72390b-53ed-4b5c-a6c9-5602ae49ca3e", 00:20:09.634 "aliases": [ 00:20:09.634 "lvs/nvme0n1p0" 00:20:09.634 ], 00:20:09.634 "product_name": "Logical Volume", 00:20:09.634 "block_size": 4096, 00:20:09.634 "num_blocks": 26476544, 00:20:09.634 "uuid": "8a72390b-53ed-4b5c-a6c9-5602ae49ca3e", 00:20:09.634 "assigned_rate_limits": { 00:20:09.634 "rw_ios_per_sec": 0, 00:20:09.634 "rw_mbytes_per_sec": 0, 00:20:09.634 "r_mbytes_per_sec": 0, 00:20:09.634 "w_mbytes_per_sec": 0 00:20:09.634 }, 00:20:09.634 "claimed": false, 00:20:09.635 "zoned": false, 00:20:09.635 "supported_io_types": { 00:20:09.635 "read": true, 00:20:09.635 "write": true, 00:20:09.635 "unmap": true, 00:20:09.635 "flush": false, 00:20:09.635 "reset": true, 00:20:09.635 "nvme_admin": false, 00:20:09.635 "nvme_io": false, 00:20:09.635 "nvme_io_md": false, 00:20:09.635 "write_zeroes": true, 00:20:09.635 "zcopy": false, 00:20:09.635 "get_zone_info": false, 00:20:09.635 "zone_management": false, 00:20:09.635 "zone_append": false, 00:20:09.635 "compare": false, 00:20:09.635 "compare_and_write": false, 00:20:09.635 "abort": false, 00:20:09.635 "seek_hole": true, 00:20:09.635 "seek_data": true, 00:20:09.635 "copy": false, 00:20:09.635 "nvme_iov_md": false 00:20:09.635 }, 00:20:09.635 "driver_specific": { 00:20:09.635 "lvol": { 00:20:09.635 "lvol_store_uuid": "298bb846-5292-4a6f-a3da-913e44be28a5", 00:20:09.635 "base_bdev": "nvme0n1", 00:20:09.635 "thin_provision": true, 00:20:09.635 "num_allocated_clusters": 0, 00:20:09.635 "snapshot": false, 00:20:09.635 "clone": false, 00:20:09.635 "esnap_clone": false 00:20:09.635 } 00:20:09.635 } 00:20:09.635 } 00:20:09.635 ]' 00:20:09.635 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:09.635 11:23:46 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:20:09.635 11:23:46 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:09.635 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:20:09.635 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:09.635 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:20:09.635 11:23:47 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:20:09.635 11:23:47 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:09.894 11:23:47 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:20:09.894 11:23:47 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:20:09.894 11:23:47 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 8a72390b-53ed-4b5c-a6c9-5602ae49ca3e 00:20:09.894 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=8a72390b-53ed-4b5c-a6c9-5602ae49ca3e 00:20:09.894 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:09.894 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:20:09.894 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:20:09.894 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a72390b-53ed-4b5c-a6c9-5602ae49ca3e 00:20:10.153 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:10.153 { 00:20:10.153 "name": "8a72390b-53ed-4b5c-a6c9-5602ae49ca3e", 00:20:10.153 "aliases": [ 00:20:10.153 "lvs/nvme0n1p0" 00:20:10.153 ], 00:20:10.153 "product_name": "Logical Volume", 00:20:10.153 "block_size": 4096, 00:20:10.153 "num_blocks": 26476544, 00:20:10.153 "uuid": "8a72390b-53ed-4b5c-a6c9-5602ae49ca3e", 00:20:10.153 "assigned_rate_limits": { 00:20:10.153 "rw_ios_per_sec": 0, 00:20:10.153 "rw_mbytes_per_sec": 0, 00:20:10.153 "r_mbytes_per_sec": 0, 00:20:10.153 "w_mbytes_per_sec": 0 00:20:10.153 }, 00:20:10.153 "claimed": false, 00:20:10.153 "zoned": false, 00:20:10.153 "supported_io_types": { 00:20:10.153 "read": true, 00:20:10.153 "write": true, 00:20:10.153 "unmap": true, 00:20:10.153 "flush": false, 00:20:10.153 "reset": true, 00:20:10.153 "nvme_admin": false, 00:20:10.153 "nvme_io": false, 00:20:10.153 "nvme_io_md": false, 00:20:10.153 "write_zeroes": true, 00:20:10.153 "zcopy": false, 00:20:10.153 "get_zone_info": false, 00:20:10.153 "zone_management": false, 00:20:10.153 "zone_append": false, 00:20:10.153 "compare": false, 00:20:10.153 "compare_and_write": false, 00:20:10.153 "abort": false, 00:20:10.153 "seek_hole": true, 00:20:10.153 "seek_data": true, 00:20:10.153 "copy": false, 00:20:10.153 "nvme_iov_md": false 00:20:10.153 }, 00:20:10.153 "driver_specific": { 00:20:10.153 "lvol": { 00:20:10.153 "lvol_store_uuid": "298bb846-5292-4a6f-a3da-913e44be28a5", 00:20:10.153 "base_bdev": "nvme0n1", 00:20:10.153 "thin_provision": true, 00:20:10.153 "num_allocated_clusters": 0, 00:20:10.153 "snapshot": false, 00:20:10.153 "clone": false, 00:20:10.153 "esnap_clone": false 00:20:10.153 } 00:20:10.153 } 00:20:10.153 } 00:20:10.153 ]' 00:20:10.153 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:10.153 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:20:10.153 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:10.153 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:20:10.153 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:10.153 11:23:47 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:20:10.153 11:23:47 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:20:10.153 11:23:47 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8a72390b-53ed-4b5c-a6c9-5602ae49ca3e -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:20:10.414 [2024-11-15 11:23:47.720900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.414 [2024-11-15 11:23:47.720967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:10.414 [2024-11-15 11:23:47.720989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:10.414 [2024-11-15 11:23:47.720999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.414 [2024-11-15 11:23:47.724299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.414 [2024-11-15 11:23:47.724444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.414 [2024-11-15 11:23:47.724470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.273 ms 00:20:10.414 [2024-11-15 11:23:47.724481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.414 [2024-11-15 11:23:47.724682] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:10.414 [2024-11-15 11:23:47.725683] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:10.414 [2024-11-15 11:23:47.725719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.414 [2024-11-15 11:23:47.725730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:10.414 [2024-11-15 11:23:47.725744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:20:10.414 [2024-11-15 11:23:47.725754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.414 [2024-11-15 11:23:47.725864] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b74028cb-3aa2-4783-bfa5-17ab25fa65a1 00:20:10.414 [2024-11-15 11:23:47.727263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.414 [2024-11-15 11:23:47.727299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:10.414 [2024-11-15 11:23:47.727311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:10.414 [2024-11-15 11:23:47.727324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.414 [2024-11-15 11:23:47.734755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.414 [2024-11-15 11:23:47.734802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:10.414 [2024-11-15 11:23:47.734820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.359 ms 00:20:10.414 [2024-11-15 11:23:47.734833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.414 [2024-11-15 11:23:47.734984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.414 [2024-11-15 11:23:47.735002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:10.414 [2024-11-15 11:23:47.735012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.081 ms 00:20:10.414 [2024-11-15 11:23:47.735037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.414 [2024-11-15 11:23:47.735076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.414 [2024-11-15 11:23:47.735090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:10.414 [2024-11-15 11:23:47.735101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:10.414 [2024-11-15 11:23:47.735116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.414 [2024-11-15 11:23:47.735152] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:10.414 [2024-11-15 11:23:47.740217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.414 [2024-11-15 11:23:47.740250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:10.414 [2024-11-15 11:23:47.740266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.077 ms 00:20:10.414 [2024-11-15 11:23:47.740277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.414 [2024-11-15 11:23:47.740339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.414 [2024-11-15 11:23:47.740351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:10.414 [2024-11-15 11:23:47.740364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:10.414 [2024-11-15 11:23:47.740390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.414 [2024-11-15 11:23:47.740423] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:10.414 [2024-11-15 11:23:47.740548] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:10.414 [2024-11-15 11:23:47.740588] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:10.414 [2024-11-15 11:23:47.740603] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:10.414 [2024-11-15 11:23:47.740619] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:10.414 [2024-11-15 11:23:47.740631] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:10.414 [2024-11-15 11:23:47.740645] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:10.414 [2024-11-15 11:23:47.740655] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:10.414 [2024-11-15 11:23:47.740667] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:10.414 [2024-11-15 11:23:47.740679] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:10.414 [2024-11-15 11:23:47.740692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.414 [2024-11-15 11:23:47.740703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:10.414 [2024-11-15 11:23:47.740715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:20:10.414 [2024-11-15 11:23:47.740725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.414 [2024-11-15 11:23:47.740811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.414 
[2024-11-15 11:23:47.740822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:10.414 [2024-11-15 11:23:47.740835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:10.414 [2024-11-15 11:23:47.740845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.414 [2024-11-15 11:23:47.740962] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:10.414 [2024-11-15 11:23:47.740974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:10.414 [2024-11-15 11:23:47.740987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:10.414 [2024-11-15 11:23:47.740997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.414 [2024-11-15 11:23:47.741010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:10.414 [2024-11-15 11:23:47.741019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:10.414 [2024-11-15 11:23:47.741031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:10.414 [2024-11-15 11:23:47.741041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:10.414 [2024-11-15 11:23:47.741052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:10.414 [2024-11-15 11:23:47.741062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:10.414 [2024-11-15 11:23:47.741073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:10.414 [2024-11-15 11:23:47.741082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:10.414 [2024-11-15 11:23:47.741094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:10.414 [2024-11-15 11:23:47.741104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:10.414 [2024-11-15 11:23:47.741116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:10.414 [2024-11-15 11:23:47.741125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.414 [2024-11-15 11:23:47.741139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:10.414 [2024-11-15 11:23:47.741148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:10.414 [2024-11-15 11:23:47.741160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.415 [2024-11-15 11:23:47.741169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:10.415 [2024-11-15 11:23:47.741182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:10.415 [2024-11-15 11:23:47.741192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.415 [2024-11-15 11:23:47.741203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:10.415 [2024-11-15 11:23:47.741212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:10.415 [2024-11-15 11:23:47.741224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.415 [2024-11-15 11:23:47.741234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:10.415 [2024-11-15 11:23:47.741245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:10.415 [2024-11-15 11:23:47.741254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.415 [2024-11-15 11:23:47.741266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:20:10.415 [2024-11-15 11:23:47.741275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:10.415 [2024-11-15 11:23:47.741286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.415 [2024-11-15 11:23:47.741295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:10.415 [2024-11-15 11:23:47.741310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:10.415 [2024-11-15 11:23:47.741318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:10.415 [2024-11-15 11:23:47.741330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:10.415 [2024-11-15 11:23:47.741340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:10.415 [2024-11-15 11:23:47.741352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:10.415 [2024-11-15 11:23:47.741361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:10.415 [2024-11-15 11:23:47.741373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:10.415 [2024-11-15 11:23:47.741382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.415 [2024-11-15 11:23:47.741393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:10.415 [2024-11-15 11:23:47.741403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:10.415 [2024-11-15 11:23:47.741414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.415 [2024-11-15 11:23:47.741423] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:10.415 [2024-11-15 11:23:47.741435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:10.415 [2024-11-15 11:23:47.741447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:10.415 [2024-11-15 11:23:47.741460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.415 [2024-11-15 11:23:47.741470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:10.415 [2024-11-15 11:23:47.741485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:10.415 [2024-11-15 11:23:47.741495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:10.415 [2024-11-15 11:23:47.741507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:10.415 [2024-11-15 11:23:47.741516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:10.415 [2024-11-15 11:23:47.741528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:10.415 [2024-11-15 11:23:47.741541] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:10.415 [2024-11-15 11:23:47.741590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:10.415 [2024-11-15 11:23:47.741606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:10.415 [2024-11-15 11:23:47.741619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:10.415 [2024-11-15 11:23:47.741629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:20:10.415 [2024-11-15 11:23:47.741642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:10.415 [2024-11-15 11:23:47.741653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:10.415 [2024-11-15 11:23:47.741665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:10.415 [2024-11-15 11:23:47.741675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:10.415 [2024-11-15 11:23:47.741688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:10.415 [2024-11-15 11:23:47.741697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:10.415 [2024-11-15 11:23:47.741712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:10.415 [2024-11-15 11:23:47.741723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:10.415 [2024-11-15 11:23:47.741735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:10.415 [2024-11-15 11:23:47.741745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:10.415 [2024-11-15 11:23:47.741758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:10.415 [2024-11-15 11:23:47.741768] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:10.415 [2024-11-15 11:23:47.741786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:10.415 [2024-11-15 11:23:47.741798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:10.415 [2024-11-15 11:23:47.741811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:10.415 [2024-11-15 11:23:47.741822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:10.415 [2024-11-15 11:23:47.741834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:10.415 [2024-11-15 11:23:47.741845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.415 [2024-11-15 11:23:47.741858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:10.415 [2024-11-15 11:23:47.741870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:20:10.415 [2024-11-15 11:23:47.741883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.415 [2024-11-15 11:23:47.741958] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:20:10.415 [2024-11-15 11:23:47.741976] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:13.705 [2024-11-15 11:23:51.028743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.705 [2024-11-15 11:23:51.028802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:13.705 [2024-11-15 11:23:51.028818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3292.118 ms 00:20:13.705 [2024-11-15 11:23:51.028832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.705 [2024-11-15 11:23:51.068626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.705 [2024-11-15 11:23:51.068822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.705 [2024-11-15 11:23:51.068848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.482 ms 00:20:13.705 [2024-11-15 11:23:51.068865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.705 [2024-11-15 11:23:51.069016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.705 [2024-11-15 11:23:51.069035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:13.705 [2024-11-15 11:23:51.069050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:13.705 [2024-11-15 11:23:51.069068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.963 [2024-11-15 11:23:51.129675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.963 [2024-11-15 11:23:51.129724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.963 [2024-11-15 11:23:51.129743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.643 ms 00:20:13.963 [2024-11-15 11:23:51.129764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.963 [2024-11-15 11:23:51.129882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.963 [2024-11-15 11:23:51.129906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.963 [2024-11-15 11:23:51.129920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:13.963 [2024-11-15 11:23:51.129936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.963 [2024-11-15 11:23:51.130401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.963 [2024-11-15 11:23:51.130423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.963 [2024-11-15 11:23:51.130435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:20:13.963 [2024-11-15 11:23:51.130447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.963 [2024-11-15 11:23:51.130584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.963 [2024-11-15 11:23:51.130598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.963 [2024-11-15 11:23:51.130610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:13.963 [2024-11-15 11:23:51.130626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.963 [2024-11-15 11:23:51.152420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.963 [2024-11-15 11:23:51.152605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:13.963 [2024-11-15 11:23:51.152630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.782 ms 00:20:13.963 [2024-11-15 11:23:51.152651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.963 [2024-11-15 11:23:51.165635] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:13.963 [2024-11-15 11:23:51.182240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.963 [2024-11-15 11:23:51.182295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:13.963 [2024-11-15 11:23:51.182314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.498 ms 00:20:13.963 [2024-11-15 11:23:51.182325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.963 [2024-11-15 11:23:51.282159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.963 [2024-11-15 11:23:51.282225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:13.963 [2024-11-15 11:23:51.282244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.871 ms 00:20:13.963 [2024-11-15 11:23:51.282256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.963 [2024-11-15 11:23:51.282477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.963 [2024-11-15 11:23:51.282492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:13.963 [2024-11-15 11:23:51.282509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:20:13.963 [2024-11-15 11:23:51.282518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.964 [2024-11-15 11:23:51.318350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.964 [2024-11-15 11:23:51.318502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:13.964 [2024-11-15 11:23:51.318532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.849 ms 00:20:13.964 [2024-11-15 11:23:51.318546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.964 [2024-11-15 11:23:51.353898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.964 [2024-11-15 11:23:51.353935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:13.964 [2024-11-15 11:23:51.353952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.311 ms 00:20:13.964 [2024-11-15 11:23:51.353962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.964 [2024-11-15 11:23:51.354778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.964 [2024-11-15 11:23:51.354805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:13.964 [2024-11-15 11:23:51.354822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:20:13.964 [2024-11-15 11:23:51.354835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.223 [2024-11-15 11:23:51.453617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.223 [2024-11-15 11:23:51.453678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:14.223 [2024-11-15 11:23:51.453701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.892 ms 00:20:14.223 [2024-11-15 11:23:51.453715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
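(Editor's note) Every FTL management step in this startup trace is emitted as a four-record group: "Action", "name: <step>", "duration: <n> ms", "status: <rc>". When hunting for slow steps (for example the roughly 3.3 s "Scrub NV cache" above), it can help to tally durations per step. A small sketch, assuming such a trace has been saved one record per line to a file named ftl_trace.log (a hypothetical name); this is an ad-hoc reading aid, not an SPDK tool:

    # Pair each "name:" record with the "duration:" record that follows it
    # and print the slowest steps first; the log shape is assumed from the
    # trace above.
    awk -F'name: |duration: | ms' '
        /trace_step.*name: /     { step = $2 }
        /trace_step.*duration: / { printf "%12.3f ms  %s\n", $2, step }
    ' ftl_trace.log | sort -rn | head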
00:20:14.223 [2024-11-15 11:23:51.491574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.223 [2024-11-15 11:23:51.491732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:14.223 [2024-11-15 11:23:51.491762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.806 ms 00:20:14.223 [2024-11-15 11:23:51.491777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.223 [2024-11-15 11:23:51.527553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.223 [2024-11-15 11:23:51.527600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:14.223 [2024-11-15 11:23:51.527619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.739 ms 00:20:14.223 [2024-11-15 11:23:51.527631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.223 [2024-11-15 11:23:51.563503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.223 [2024-11-15 11:23:51.563544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.223 [2024-11-15 11:23:51.563575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.838 ms 00:20:14.223 [2024-11-15 11:23:51.563604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.223 [2024-11-15 11:23:51.563701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.223 [2024-11-15 11:23:51.563720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:14.223 [2024-11-15 11:23:51.563740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:14.223 [2024-11-15 11:23:51.563753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.223 [2024-11-15 11:23:51.563864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.223 [2024-11-15 11:23:51.563882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.223 [2024-11-15 11:23:51.563899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:14.223 [2024-11-15 11:23:51.563912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.223 [2024-11-15 11:23:51.564972] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.223 [2024-11-15 11:23:51.569096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3850.033 ms, result 0 00:20:14.223 [2024-11-15 11:23:51.570057] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.223 { 00:20:14.223 "name": "ftl0", 00:20:14.223 "uuid": "b74028cb-3aa2-4783-bfa5-17ab25fa65a1" 00:20:14.223 } 00:20:14.223 11:23:51 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:14.223 11:23:51 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:20:14.223 11:23:51 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:14.223 11:23:51 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:20:14.223 11:23:51 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:14.223 11:23:51 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:14.223 11:23:51 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:14.482 11:23:51 ftl.ftl_trim -- 
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:14.742 [ 00:20:14.742 { 00:20:14.742 "name": "ftl0", 00:20:14.742 "aliases": [ 00:20:14.742 "b74028cb-3aa2-4783-bfa5-17ab25fa65a1" 00:20:14.742 ], 00:20:14.742 "product_name": "FTL disk", 00:20:14.742 "block_size": 4096, 00:20:14.742 "num_blocks": 23592960, 00:20:14.742 "uuid": "b74028cb-3aa2-4783-bfa5-17ab25fa65a1", 00:20:14.742 "assigned_rate_limits": { 00:20:14.742 "rw_ios_per_sec": 0, 00:20:14.742 "rw_mbytes_per_sec": 0, 00:20:14.742 "r_mbytes_per_sec": 0, 00:20:14.742 "w_mbytes_per_sec": 0 00:20:14.742 }, 00:20:14.742 "claimed": false, 00:20:14.742 "zoned": false, 00:20:14.742 "supported_io_types": { 00:20:14.742 "read": true, 00:20:14.742 "write": true, 00:20:14.742 "unmap": true, 00:20:14.742 "flush": true, 00:20:14.742 "reset": false, 00:20:14.742 "nvme_admin": false, 00:20:14.742 "nvme_io": false, 00:20:14.742 "nvme_io_md": false, 00:20:14.742 "write_zeroes": true, 00:20:14.742 "zcopy": false, 00:20:14.742 "get_zone_info": false, 00:20:14.742 "zone_management": false, 00:20:14.742 "zone_append": false, 00:20:14.742 "compare": false, 00:20:14.742 "compare_and_write": false, 00:20:14.742 "abort": false, 00:20:14.742 "seek_hole": false, 00:20:14.742 "seek_data": false, 00:20:14.742 "copy": false, 00:20:14.742 "nvme_iov_md": false 00:20:14.742 }, 00:20:14.742 "driver_specific": { 00:20:14.742 "ftl": { 00:20:14.742 "base_bdev": "8a72390b-53ed-4b5c-a6c9-5602ae49ca3e", 00:20:14.742 "cache": "nvc0n1p0" 00:20:14.742 } 00:20:14.742 } 00:20:14.742 } 00:20:14.742 ] 00:20:14.742 11:23:52 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:20:14.742 11:23:52 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:14.742 11:23:52 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:15.001 11:23:52 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:15.002 11:23:52 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:15.261 11:23:52 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:15.261 { 00:20:15.261 "name": "ftl0", 00:20:15.261 "aliases": [ 00:20:15.261 "b74028cb-3aa2-4783-bfa5-17ab25fa65a1" 00:20:15.261 ], 00:20:15.261 "product_name": "FTL disk", 00:20:15.261 "block_size": 4096, 00:20:15.261 "num_blocks": 23592960, 00:20:15.261 "uuid": "b74028cb-3aa2-4783-bfa5-17ab25fa65a1", 00:20:15.261 "assigned_rate_limits": { 00:20:15.261 "rw_ios_per_sec": 0, 00:20:15.261 "rw_mbytes_per_sec": 0, 00:20:15.261 "r_mbytes_per_sec": 0, 00:20:15.261 "w_mbytes_per_sec": 0 00:20:15.261 }, 00:20:15.261 "claimed": false, 00:20:15.261 "zoned": false, 00:20:15.261 "supported_io_types": { 00:20:15.261 "read": true, 00:20:15.261 "write": true, 00:20:15.261 "unmap": true, 00:20:15.261 "flush": true, 00:20:15.261 "reset": false, 00:20:15.261 "nvme_admin": false, 00:20:15.261 "nvme_io": false, 00:20:15.261 "nvme_io_md": false, 00:20:15.261 "write_zeroes": true, 00:20:15.261 "zcopy": false, 00:20:15.261 "get_zone_info": false, 00:20:15.261 "zone_management": false, 00:20:15.261 "zone_append": false, 00:20:15.261 "compare": false, 00:20:15.261 "compare_and_write": false, 00:20:15.261 "abort": false, 00:20:15.261 "seek_hole": false, 00:20:15.261 "seek_data": false, 00:20:15.261 "copy": false, 00:20:15.261 "nvme_iov_md": false 00:20:15.261 }, 00:20:15.261 "driver_specific": { 00:20:15.261 "ftl": { 00:20:15.261 "base_bdev": "8a72390b-53ed-4b5c-a6c9-5602ae49ca3e", 
00:20:15.261 "cache": "nvc0n1p0" 00:20:15.261 } 00:20:15.261 } 00:20:15.261 } 00:20:15.261 ]' 00:20:15.261 11:23:52 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:15.261 11:23:52 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:15.261 11:23:52 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:15.261 [2024-11-15 11:23:52.630339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.261 [2024-11-15 11:23:52.630417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:15.261 [2024-11-15 11:23:52.630442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:15.261 [2024-11-15 11:23:52.630461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.261 [2024-11-15 11:23:52.630505] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:15.261 [2024-11-15 11:23:52.635145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.261 [2024-11-15 11:23:52.635180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:15.261 [2024-11-15 11:23:52.635203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.620 ms 00:20:15.261 [2024-11-15 11:23:52.635215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.261 [2024-11-15 11:23:52.635979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.261 [2024-11-15 11:23:52.636007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:15.261 [2024-11-15 11:23:52.636023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:20:15.261 [2024-11-15 11:23:52.636034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.261 [2024-11-15 11:23:52.638845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.261 [2024-11-15 11:23:52.639051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:15.261 [2024-11-15 11:23:52.639079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:20:15.261 [2024-11-15 11:23:52.639091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.261 [2024-11-15 11:23:52.644742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.261 [2024-11-15 11:23:52.644776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:15.261 [2024-11-15 11:23:52.644791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.603 ms 00:20:15.261 [2024-11-15 11:23:52.644802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.521 [2024-11-15 11:23:52.682907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.521 [2024-11-15 11:23:52.682945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:15.521 [2024-11-15 11:23:52.682967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.066 ms 00:20:15.521 [2024-11-15 11:23:52.682977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.521 [2024-11-15 11:23:52.706813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.521 [2024-11-15 11:23:52.706852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:15.521 [2024-11-15 11:23:52.706871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 23.777 ms 00:20:15.521 [2024-11-15 11:23:52.706886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.521 [2024-11-15 11:23:52.707157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.521 [2024-11-15 11:23:52.707172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:15.521 [2024-11-15 11:23:52.707187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:20:15.521 [2024-11-15 11:23:52.707197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.521 [2024-11-15 11:23:52.744283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.521 [2024-11-15 11:23:52.744320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.521 [2024-11-15 11:23:52.744337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.106 ms 00:20:15.521 [2024-11-15 11:23:52.744347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.521 [2024-11-15 11:23:52.780718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.521 [2024-11-15 11:23:52.780754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.521 [2024-11-15 11:23:52.780775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.331 ms 00:20:15.521 [2024-11-15 11:23:52.780785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.521 [2024-11-15 11:23:52.817841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.521 [2024-11-15 11:23:52.818028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.521 [2024-11-15 11:23:52.818054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.021 ms 00:20:15.521 [2024-11-15 11:23:52.818064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.521 [2024-11-15 11:23:52.854258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.521 [2024-11-15 11:23:52.854293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.521 [2024-11-15 11:23:52.854310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.073 ms 00:20:15.521 [2024-11-15 11:23:52.854321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.521 [2024-11-15 11:23:52.854411] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.521 [2024-11-15 11:23:52.854431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854531] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.521 [2024-11-15 11:23:52.854857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.854868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.854883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 
[2024-11-15 11:23:52.854895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.854909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.854920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.854934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.854945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.854962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.854972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.854986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.854997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:20:15.522 [2024-11-15 11:23:52.855213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.522 [2024-11-15 11:23:52.855804] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.522 [2024-11-15 11:23:52.855820] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b74028cb-3aa2-4783-bfa5-17ab25fa65a1 00:20:15.522 [2024-11-15 11:23:52.855832] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.522 [2024-11-15 11:23:52.855845] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:15.522 [2024-11-15 11:23:52.855855] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.522 [2024-11-15 11:23:52.855872] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.522 [2024-11-15 11:23:52.855882] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.522 [2024-11-15 11:23:52.855896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:20:15.522 [2024-11-15 11:23:52.855906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.522 [2024-11-15 11:23:52.855919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.522 [2024-11-15 11:23:52.855928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.522 [2024-11-15 11:23:52.855940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.522 [2024-11-15 11:23:52.855951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.522 [2024-11-15 11:23:52.855966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.535 ms 00:20:15.522 [2024-11-15 11:23:52.855976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.522 [2024-11-15 11:23:52.877362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.522 [2024-11-15 11:23:52.877532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.522 [2024-11-15 11:23:52.877576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.372 ms 00:20:15.522 [2024-11-15 11:23:52.877588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.522 [2024-11-15 11:23:52.878311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.522 [2024-11-15 11:23:52.878329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.522 [2024-11-15 11:23:52.878344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:20:15.522 [2024-11-15 11:23:52.878355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.781 [2024-11-15 11:23:52.953012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.781 [2024-11-15 11:23:52.953047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.781 [2024-11-15 11:23:52.953070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.781 [2024-11-15 11:23:52.953081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.781 [2024-11-15 11:23:52.953205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.781 [2024-11-15 11:23:52.953219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.781 [2024-11-15 11:23:52.953235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.781 [2024-11-15 11:23:52.953246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.781 [2024-11-15 11:23:52.953330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.781 [2024-11-15 11:23:52.953345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.781 [2024-11-15 11:23:52.953367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.781 [2024-11-15 11:23:52.953378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.781 [2024-11-15 11:23:52.953417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.781 [2024-11-15 11:23:52.953429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.781 [2024-11-15 11:23:52.953443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.781 [2024-11-15 11:23:52.953454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.781 [2024-11-15 11:23:53.098201] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.781 [2024-11-15 11:23:53.098273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.781 [2024-11-15 11:23:53.098293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.781 [2024-11-15 11:23:53.098306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.040 [2024-11-15 11:23:53.205528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.040 [2024-11-15 11:23:53.205607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.040 [2024-11-15 11:23:53.205629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.040 [2024-11-15 11:23:53.205641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.040 [2024-11-15 11:23:53.205808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.040 [2024-11-15 11:23:53.205821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.040 [2024-11-15 11:23:53.205859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.040 [2024-11-15 11:23:53.205875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.040 [2024-11-15 11:23:53.205948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.040 [2024-11-15 11:23:53.205960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.040 [2024-11-15 11:23:53.205974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.040 [2024-11-15 11:23:53.205985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.040 [2024-11-15 11:23:53.206154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.040 [2024-11-15 11:23:53.206170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.040 [2024-11-15 11:23:53.206184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.040 [2024-11-15 11:23:53.206199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.040 [2024-11-15 11:23:53.206266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.040 [2024-11-15 11:23:53.206280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:16.040 [2024-11-15 11:23:53.206295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.040 [2024-11-15 11:23:53.206305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.040 [2024-11-15 11:23:53.206382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.040 [2024-11-15 11:23:53.206394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.040 [2024-11-15 11:23:53.206421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.040 [2024-11-15 11:23:53.206433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.040 [2024-11-15 11:23:53.206513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.040 [2024-11-15 11:23:53.206533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.040 [2024-11-15 11:23:53.206549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.040 [2024-11-15 11:23:53.206571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:16.040 [2024-11-15 11:23:53.206827] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 577.400 ms, result 0 00:20:16.040 true 00:20:16.040 11:23:53 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75441 00:20:16.040 11:23:53 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75441 ']' 00:20:16.040 11:23:53 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75441 00:20:16.040 11:23:53 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:20:16.040 11:23:53 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:16.040 11:23:53 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75441 00:20:16.040 killing process with pid 75441 00:20:16.040 11:23:53 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:16.040 11:23:53 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:16.040 11:23:53 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75441' 00:20:16.040 11:23:53 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75441 00:20:16.041 11:23:53 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75441 00:20:21.309 11:23:58 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:22.245 65536+0 records in 00:20:22.245 65536+0 records out 00:20:22.245 268435456 bytes (268 MB, 256 MiB) copied, 1.01765 s, 264 MB/s 00:20:22.245 11:23:59 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:22.504 [2024-11-15 11:23:59.715378] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
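The shutdown statistics earlier in this trace report total writes: 960 against user writes: 0, which is why WAF prints as inf: write amplification is total writes divided by user writes, and at that point only FTL metadata, not user data, had been written. The trim test then generates a 256 MiB random pattern and pushes it through a freshly started ftl0 via spdk_dd. A minimal sketch of that step, using only paths and flags visible in this log (the of= target of dd is inferred from the --if path below; byte math is in the comments):

# 65536 blocks of 4 KiB: 65536 * 4096 = 268435456 bytes = 256 MiB.
# dd reports this as "268 MB, 256 MiB"; 268435456 B / 1.01765 s is ~264 MB/s.
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536

# spdk_dd runs its own SPDK app instance; --json points it at the ftl.json
# config that brings ftl0 back up (the FTL startup trace below comes from
# this process), and --ob selects the output bdev.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
  --ob=ftl0 \
  --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The per-second "Copying: N/256 [MB] (25 MBps)" lines further down come from this spdk_dd run; 256 MB at roughly 25 MB/s accounts for the ten seconds between its start and the second shutdown.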
00:20:22.504 [2024-11-15 11:23:59.715670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75656 ] 00:20:22.504 [2024-11-15 11:23:59.896727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.822 [2024-11-15 11:24:00.008674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.081 [2024-11-15 11:24:00.374818] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:23.081 [2024-11-15 11:24:00.375099] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:23.341 [2024-11-15 11:24:00.537325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.341 [2024-11-15 11:24:00.537381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:23.341 [2024-11-15 11:24:00.537397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:23.341 [2024-11-15 11:24:00.537408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.341 [2024-11-15 11:24:00.540550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.341 [2024-11-15 11:24:00.540599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:23.341 [2024-11-15 11:24:00.540612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.125 ms 00:20:23.341 [2024-11-15 11:24:00.540623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.341 [2024-11-15 11:24:00.540719] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:23.341 [2024-11-15 11:24:00.541747] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:23.341 [2024-11-15 11:24:00.541781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.341 [2024-11-15 11:24:00.541792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:23.341 [2024-11-15 11:24:00.541803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:20:23.342 [2024-11-15 11:24:00.541812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.342 [2024-11-15 11:24:00.543293] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:23.342 [2024-11-15 11:24:00.562367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.342 [2024-11-15 11:24:00.562422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:23.342 [2024-11-15 11:24:00.562437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.106 ms 00:20:23.342 [2024-11-15 11:24:00.562447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.342 [2024-11-15 11:24:00.562547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.342 [2024-11-15 11:24:00.562588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:23.342 [2024-11-15 11:24:00.562599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:23.342 [2024-11-15 11:24:00.562610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.342 [2024-11-15 11:24:00.569252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
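The startup trace above shows ftl0 being assembled from a base bdev plus nvc0n1p0 as the NV-cache write buffer ("Using nvc0n1p0 as write buffer cache"). The create call itself happened before this excerpt; a sketch of what it typically looks like, assuming the standard SPDK bdev_ftl_create RPC and a placeholder base bdev name (nvme0n1 is an assumption, not taken from this log):

# Create: -b names the FTL bdev, -d the base (data) bdev, -c the NV cache bdev.
# nvme0n1 is a placeholder; ftl0 and nvc0n1p0 are the names this log uses.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0

# Teardown, exactly as trim.sh@61 invoked it earlier in this log:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0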
00:20:23.342 [2024-11-15 11:24:00.569279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:23.342 [2024-11-15 11:24:00.569291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.611 ms 00:20:23.342 [2024-11-15 11:24:00.569301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.342 [2024-11-15 11:24:00.569396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.342 [2024-11-15 11:24:00.569411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:23.342 [2024-11-15 11:24:00.569421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:23.342 [2024-11-15 11:24:00.569432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.342 [2024-11-15 11:24:00.569462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.342 [2024-11-15 11:24:00.569477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:23.342 [2024-11-15 11:24:00.569487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:23.342 [2024-11-15 11:24:00.569497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.342 [2024-11-15 11:24:00.569519] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:23.342 [2024-11-15 11:24:00.574309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.342 [2024-11-15 11:24:00.574340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:23.342 [2024-11-15 11:24:00.574353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.803 ms 00:20:23.342 [2024-11-15 11:24:00.574363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.342 [2024-11-15 11:24:00.574429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.342 [2024-11-15 11:24:00.574441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:23.342 [2024-11-15 11:24:00.574452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:23.342 [2024-11-15 11:24:00.574462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.342 [2024-11-15 11:24:00.574482] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:23.342 [2024-11-15 11:24:00.574507] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:23.342 [2024-11-15 11:24:00.574543] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:23.342 [2024-11-15 11:24:00.574574] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:23.342 [2024-11-15 11:24:00.574664] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:23.342 [2024-11-15 11:24:00.574677] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:23.342 [2024-11-15 11:24:00.574690] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:23.342 [2024-11-15 11:24:00.574703] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:23.342 [2024-11-15 11:24:00.574719] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:23.342 [2024-11-15 11:24:00.574730] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:23.342 [2024-11-15 11:24:00.574741] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:23.342 [2024-11-15 11:24:00.574750] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:23.342 [2024-11-15 11:24:00.574760] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:23.342 [2024-11-15 11:24:00.574769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.342 [2024-11-15 11:24:00.574780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:23.342 [2024-11-15 11:24:00.574791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:20:23.342 [2024-11-15 11:24:00.574800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.342 [2024-11-15 11:24:00.574876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.342 [2024-11-15 11:24:00.574892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:23.342 [2024-11-15 11:24:00.574902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:23.342 [2024-11-15 11:24:00.574911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.342 [2024-11-15 11:24:00.574997] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:23.342 [2024-11-15 11:24:00.575008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:23.342 [2024-11-15 11:24:00.575018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.342 [2024-11-15 11:24:00.575029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.342 [2024-11-15 11:24:00.575040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:23.342 [2024-11-15 11:24:00.575049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:23.342 [2024-11-15 11:24:00.575058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:23.342 [2024-11-15 11:24:00.575067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:23.342 [2024-11-15 11:24:00.575076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:23.342 [2024-11-15 11:24:00.575085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.342 [2024-11-15 11:24:00.575095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:23.342 [2024-11-15 11:24:00.575105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:23.342 [2024-11-15 11:24:00.575114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.342 [2024-11-15 11:24:00.575133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:23.342 [2024-11-15 11:24:00.575143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:23.342 [2024-11-15 11:24:00.575152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.342 [2024-11-15 11:24:00.575161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:23.342 [2024-11-15 11:24:00.575171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:23.342 [2024-11-15 11:24:00.575179] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.342 [2024-11-15 11:24:00.575189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:23.342 [2024-11-15 11:24:00.575198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:23.342 [2024-11-15 11:24:00.575207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.342 [2024-11-15 11:24:00.575216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:23.342 [2024-11-15 11:24:00.575226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:23.342 [2024-11-15 11:24:00.575234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.342 [2024-11-15 11:24:00.575243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:23.342 [2024-11-15 11:24:00.575253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:23.342 [2024-11-15 11:24:00.575262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.342 [2024-11-15 11:24:00.575270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:23.342 [2024-11-15 11:24:00.575279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:23.342 [2024-11-15 11:24:00.575288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.342 [2024-11-15 11:24:00.575297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:23.342 [2024-11-15 11:24:00.575306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:23.342 [2024-11-15 11:24:00.575314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.342 [2024-11-15 11:24:00.575323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:23.342 [2024-11-15 11:24:00.575332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:23.342 [2024-11-15 11:24:00.575340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.342 [2024-11-15 11:24:00.575349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:23.342 [2024-11-15 11:24:00.575358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:23.342 [2024-11-15 11:24:00.575367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.342 [2024-11-15 11:24:00.575375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:23.342 [2024-11-15 11:24:00.575384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:23.342 [2024-11-15 11:24:00.575394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.342 [2024-11-15 11:24:00.575402] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:23.342 [2024-11-15 11:24:00.575412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:23.342 [2024-11-15 11:24:00.575421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.342 [2024-11-15 11:24:00.575434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.342 [2024-11-15 11:24:00.575444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:23.343 [2024-11-15 11:24:00.575454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:23.343 [2024-11-15 11:24:00.575463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:23.343 
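Two figures in this layout dump pin each other down: the 90.00 MiB l2p region is exactly the 23592960 L2P entries times the 4-byte address size, and that entry count is the same num_blocks the jq query returned before the earlier unload (nb=23592960). A quick shell check, assuming the 4 KiB FTL block size implied by the test's 4K writes:

# L2P table: entries x address size.
echo $(( 23592960 * 4 ))                   # 94371840 bytes
echo $(( 23592960 * 4 / 1024 / 1024 ))     # 90  -> "Region l2p ... blocks: 90.00 MiB"
# User-visible capacity: entries x 4096-byte blocks.
echo $(( 23592960 * 4096 / 1024 / 1024 ))  # 92160 MiB (90 GiB) on a 103424 MiB base device

Roughly speaking, the gap between the 92160 MiB of user LBAs and the 103424.00 MiB base capacity is taken up by FTL metadata regions and over-provisioned band space.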
[2024-11-15 11:24:00.575472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:23.343 [2024-11-15 11:24:00.575481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:23.343 [2024-11-15 11:24:00.575491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:23.343 [2024-11-15 11:24:00.575501] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:23.343 [2024-11-15 11:24:00.575513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.343 [2024-11-15 11:24:00.575524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:23.343 [2024-11-15 11:24:00.575534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:23.343 [2024-11-15 11:24:00.575544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:23.343 [2024-11-15 11:24:00.575554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:23.343 [2024-11-15 11:24:00.575576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:23.343 [2024-11-15 11:24:00.575586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:23.343 [2024-11-15 11:24:00.575597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:23.343 [2024-11-15 11:24:00.575607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:23.343 [2024-11-15 11:24:00.575617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:23.343 [2024-11-15 11:24:00.575627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:23.343 [2024-11-15 11:24:00.575637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:23.343 [2024-11-15 11:24:00.575647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:23.343 [2024-11-15 11:24:00.575658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:23.343 [2024-11-15 11:24:00.575668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:23.343 [2024-11-15 11:24:00.575679] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:23.343 [2024-11-15 11:24:00.575689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.343 [2024-11-15 11:24:00.575700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:23.343 [2024-11-15 11:24:00.575710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:23.343 [2024-11-15 11:24:00.575720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:23.343 [2024-11-15 11:24:00.575730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:23.343 [2024-11-15 11:24:00.575741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.343 [2024-11-15 11:24:00.575751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:23.343 [2024-11-15 11:24:00.575765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:20:23.343 [2024-11-15 11:24:00.575775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.343 [2024-11-15 11:24:00.614904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.343 [2024-11-15 11:24:00.614940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:23.343 [2024-11-15 11:24:00.614955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.139 ms 00:20:23.343 [2024-11-15 11:24:00.614966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.343 [2024-11-15 11:24:00.615092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.343 [2024-11-15 11:24:00.615109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:23.343 [2024-11-15 11:24:00.615120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:23.343 [2024-11-15 11:24:00.615130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.343 [2024-11-15 11:24:00.674563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.343 [2024-11-15 11:24:00.674598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:23.343 [2024-11-15 11:24:00.674612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.499 ms 00:20:23.343 [2024-11-15 11:24:00.674626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.343 [2024-11-15 11:24:00.674737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.343 [2024-11-15 11:24:00.674750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:23.343 [2024-11-15 11:24:00.674762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:23.343 [2024-11-15 11:24:00.674772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.343 [2024-11-15 11:24:00.675203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.343 [2024-11-15 11:24:00.675216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:23.343 [2024-11-15 11:24:00.675228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:20:23.343 [2024-11-15 11:24:00.675241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.343 [2024-11-15 11:24:00.675360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.343 [2024-11-15 11:24:00.675373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:23.343 [2024-11-15 11:24:00.675385] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:23.343 [2024-11-15 11:24:00.675395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.343 [2024-11-15 11:24:00.693447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.343 [2024-11-15 11:24:00.693484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:23.343 [2024-11-15 11:24:00.693499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.058 ms 00:20:23.343 [2024-11-15 11:24:00.693510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.343 [2024-11-15 11:24:00.712478] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:23.343 [2024-11-15 11:24:00.712518] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:23.343 [2024-11-15 11:24:00.712534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.343 [2024-11-15 11:24:00.712545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:23.343 [2024-11-15 11:24:00.712576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.912 ms 00:20:23.343 [2024-11-15 11:24:00.712588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.602 [2024-11-15 11:24:00.742461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.602 [2024-11-15 11:24:00.742636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:23.602 [2024-11-15 11:24:00.742691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.839 ms 00:20:23.602 [2024-11-15 11:24:00.742703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.602 [2024-11-15 11:24:00.761259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.602 [2024-11-15 11:24:00.761296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:23.602 [2024-11-15 11:24:00.761309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.448 ms 00:20:23.602 [2024-11-15 11:24:00.761319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.602 [2024-11-15 11:24:00.778983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.602 [2024-11-15 11:24:00.779019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:23.602 [2024-11-15 11:24:00.779033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.616 ms 00:20:23.602 [2024-11-15 11:24:00.779042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.602 [2024-11-15 11:24:00.779863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.602 [2024-11-15 11:24:00.779888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:23.602 [2024-11-15 11:24:00.779900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:20:23.602 [2024-11-15 11:24:00.779910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.602 [2024-11-15 11:24:00.864369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.602 [2024-11-15 11:24:00.864606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:23.602 [2024-11-15 11:24:00.864633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.566 ms 00:20:23.602 [2024-11-15 11:24:00.864645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.602 [2024-11-15 11:24:00.875441] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:23.602 [2024-11-15 11:24:00.891417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.602 [2024-11-15 11:24:00.891619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:23.602 [2024-11-15 11:24:00.891645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.657 ms 00:20:23.602 [2024-11-15 11:24:00.891658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.602 [2024-11-15 11:24:00.891787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.602 [2024-11-15 11:24:00.891805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:23.602 [2024-11-15 11:24:00.891817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:23.602 [2024-11-15 11:24:00.891827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.602 [2024-11-15 11:24:00.891883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.602 [2024-11-15 11:24:00.891895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:23.602 [2024-11-15 11:24:00.891905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:23.602 [2024-11-15 11:24:00.891916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.602 [2024-11-15 11:24:00.891951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.602 [2024-11-15 11:24:00.891964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:23.602 [2024-11-15 11:24:00.891977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:23.602 [2024-11-15 11:24:00.891988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.602 [2024-11-15 11:24:00.892024] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:23.602 [2024-11-15 11:24:00.892037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.602 [2024-11-15 11:24:00.892047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:23.602 [2024-11-15 11:24:00.892057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:23.602 [2024-11-15 11:24:00.892067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.602 [2024-11-15 11:24:00.928649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.602 [2024-11-15 11:24:00.928693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:23.602 [2024-11-15 11:24:00.928707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.619 ms 00:20:23.602 [2024-11-15 11:24:00.928718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.602 [2024-11-15 11:24:00.928830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.602 [2024-11-15 11:24:00.928844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:23.602 [2024-11-15 11:24:00.928855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:23.602 [2024-11-15 11:24:00.928866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
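Each management step in these traces is logged as the same four-line quartet: Action, then name, duration, and status. The 'FTL startup' finish_msg just below folds them into a single total (392.760 ms). A small awk sketch that tallies the per-step durations from a saved copy of this console output (ftl.log is a hypothetical filename, and the sketch assumes one log statement per line, as in the raw console view):

# Pair each trace_step "name:" line with the "duration:" line that follows,
# print the step, and keep a running total; "status:" lines are ignored.
awk '/trace_step.*name:/     { sub(/.*name: /, ""); name = $0 }
     /trace_step.*duration:/ { ms = $(NF-1); total += ms
                               printf "%-35s %9.3f ms\n", name, ms }
     END                     { printf "%-35s %9.3f ms\n", "total", total }' ftl.log

Summed this way, the steps should land close to the 392.760 ms the finish_msg reports, with the shortfall being scheduling gaps between steps.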
00:20:23.602 [2024-11-15 11:24:00.929754] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:23.602 [2024-11-15 11:24:00.934068] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 392.760 ms, result 0 00:20:23.602 [2024-11-15 11:24:00.934950] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:23.602 [2024-11-15 11:24:00.953302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:24.977  [2024-11-15T11:24:03.312Z] Copying: 25/256 [MB] (25 MBps) [2024-11-15T11:24:04.246Z] Copying: 51/256 [MB] (25 MBps) [2024-11-15T11:24:05.182Z] Copying: 77/256 [MB] (26 MBps) [2024-11-15T11:24:06.118Z] Copying: 104/256 [MB] (26 MBps) [2024-11-15T11:24:07.054Z] Copying: 130/256 [MB] (25 MBps) [2024-11-15T11:24:07.989Z] Copying: 155/256 [MB] (25 MBps) [2024-11-15T11:24:09.366Z] Copying: 181/256 [MB] (25 MBps) [2024-11-15T11:24:10.303Z] Copying: 207/256 [MB] (25 MBps) [2024-11-15T11:24:10.870Z] Copying: 232/256 [MB] (25 MBps) [2024-11-15T11:24:10.870Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-15 11:24:10.816277] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:33.469 [2024-11-15 11:24:10.830870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.469 [2024-11-15 11:24:10.831011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:33.469 [2024-11-15 11:24:10.831034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:33.469 [2024-11-15 11:24:10.831045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.469 [2024-11-15 11:24:10.831085] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:33.469 [2024-11-15 11:24:10.835201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.469 [2024-11-15 11:24:10.835230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:33.469 [2024-11-15 11:24:10.835244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.106 ms 00:20:33.469 [2024-11-15 11:24:10.835255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.469 [2024-11-15 11:24:10.837197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.469 [2024-11-15 11:24:10.837234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:33.469 [2024-11-15 11:24:10.837248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.919 ms 00:20:33.469 [2024-11-15 11:24:10.837257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.469 [2024-11-15 11:24:10.844083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.469 [2024-11-15 11:24:10.844227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:33.469 [2024-11-15 11:24:10.844254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.817 ms 00:20:33.469 [2024-11-15 11:24:10.844264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.469 [2024-11-15 11:24:10.849903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.469 [2024-11-15 11:24:10.849935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:33.469 
[2024-11-15 11:24:10.849946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.610 ms 00:20:33.469 [2024-11-15 11:24:10.849956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.728 [2024-11-15 11:24:10.886008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.728 [2024-11-15 11:24:10.886045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:33.728 [2024-11-15 11:24:10.886059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.065 ms 00:20:33.728 [2024-11-15 11:24:10.886068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.728 [2024-11-15 11:24:10.907137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.728 [2024-11-15 11:24:10.907174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:33.728 [2024-11-15 11:24:10.907193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.035 ms 00:20:33.728 [2024-11-15 11:24:10.907207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.728 [2024-11-15 11:24:10.907337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.728 [2024-11-15 11:24:10.907350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:33.728 [2024-11-15 11:24:10.907362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:33.728 [2024-11-15 11:24:10.907371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.728 [2024-11-15 11:24:10.943927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.728 [2024-11-15 11:24:10.943961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:33.728 [2024-11-15 11:24:10.943973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.597 ms 00:20:33.728 [2024-11-15 11:24:10.943983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.728 [2024-11-15 11:24:10.979698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.728 [2024-11-15 11:24:10.979732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:33.728 [2024-11-15 11:24:10.979744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.715 ms 00:20:33.728 [2024-11-15 11:24:10.979754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.728 [2024-11-15 11:24:11.016634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.728 [2024-11-15 11:24:11.016670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:33.728 [2024-11-15 11:24:11.016683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.885 ms 00:20:33.728 [2024-11-15 11:24:11.016692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.728 [2024-11-15 11:24:11.052405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.728 [2024-11-15 11:24:11.052441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:33.728 [2024-11-15 11:24:11.052454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.688 ms 00:20:33.728 [2024-11-15 11:24:11.052464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.728 [2024-11-15 11:24:11.052520] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:33.728 [2024-11-15 11:24:11.052544] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:33.728 [2024-11-15 11:24:11.052739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 
11:24:11.052821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.052998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:20:33.729 [2024-11-15 11:24:11.053079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:33.729 [2024-11-15 11:24:11.053622] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:33.729 [2024-11-15 11:24:11.053631] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b74028cb-3aa2-4783-bfa5-17ab25fa65a1 00:20:33.729 [2024-11-15 11:24:11.053642] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:33.729 [2024-11-15 11:24:11.053651] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:33.729 [2024-11-15 11:24:11.053661] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:33.729 [2024-11-15 11:24:11.053671] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:33.729 [2024-11-15 11:24:11.053681] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:33.729 [2024-11-15 11:24:11.053690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:33.729 [2024-11-15 11:24:11.053700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:33.729 [2024-11-15 11:24:11.053709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:33.730 [2024-11-15 11:24:11.053718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:33.730 [2024-11-15 11:24:11.053728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.730 [2024-11-15 11:24:11.053737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:33.730 [2024-11-15 11:24:11.053752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.210 ms 00:20:33.730 [2024-11-15 11:24:11.053761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.730 [2024-11-15 11:24:11.073688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.730 [2024-11-15 11:24:11.073822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:33.730 [2024-11-15 11:24:11.073842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.938 ms 00:20:33.730 [2024-11-15 11:24:11.073853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.730 [2024-11-15 11:24:11.074410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.730 [2024-11-15 11:24:11.074429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:33.730 [2024-11-15 11:24:11.074440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:20:33.730 [2024-11-15 11:24:11.074450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.988 [2024-11-15 11:24:11.128355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.988 [2024-11-15 11:24:11.128392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:33.988 [2024-11-15 11:24:11.128406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.988 [2024-11-15 11:24:11.128417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.988 [2024-11-15 11:24:11.128515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.988 [2024-11-15 11:24:11.128531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:33.988 [2024-11-15 11:24:11.128542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.988 [2024-11-15 11:24:11.128552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:33.988 [2024-11-15 11:24:11.128614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.988 [2024-11-15 11:24:11.128627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:33.988 [2024-11-15 11:24:11.128638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.988 [2024-11-15 11:24:11.128649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.988 [2024-11-15 11:24:11.128681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.988 [2024-11-15 11:24:11.128691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:33.988 [2024-11-15 11:24:11.128705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.988 [2024-11-15 11:24:11.128715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.988 [2024-11-15 11:24:11.252861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.988 [2024-11-15 11:24:11.252911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:33.988 [2024-11-15 11:24:11.252927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.988 [2024-11-15 11:24:11.252938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.988 [2024-11-15 11:24:11.353486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.988 [2024-11-15 11:24:11.353542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:33.988 [2024-11-15 11:24:11.353574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.988 [2024-11-15 11:24:11.353586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.988 [2024-11-15 11:24:11.353679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.988 [2024-11-15 11:24:11.353692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.989 [2024-11-15 11:24:11.353704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.989 [2024-11-15 11:24:11.353714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.989 [2024-11-15 11:24:11.353743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.989 [2024-11-15 11:24:11.353754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.989 [2024-11-15 11:24:11.353765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.989 [2024-11-15 11:24:11.353779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.989 [2024-11-15 11:24:11.353889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.989 [2024-11-15 11:24:11.353903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.989 [2024-11-15 11:24:11.353914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.989 [2024-11-15 11:24:11.353924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.989 [2024-11-15 11:24:11.353961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.989 [2024-11-15 11:24:11.353972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:33.989 [2024-11-15 11:24:11.353983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.989 
[2024-11-15 11:24:11.353992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:33.989 [2024-11-15 11:24:11.354038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:33.989 [2024-11-15 11:24:11.354050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:33.989 [2024-11-15 11:24:11.354060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:33.989 [2024-11-15 11:24:11.354070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:33.989 [2024-11-15 11:24:11.354112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:33.989 [2024-11-15 11:24:11.354124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:33.989 [2024-11-15 11:24:11.354134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:33.989 [2024-11-15 11:24:11.354157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:33.989 [2024-11-15 11:24:11.354297] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 524.269 ms, result 0
00:20:35.362
00:20:35.362
00:20:35.362 11:24:12 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=75786
00:20:35.363 11:24:12 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:20:35.363 11:24:12 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 75786
00:20:35.363 11:24:12 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75786 ']'
00:20:35.363 11:24:12 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:35.363 11:24:12 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100
00:20:35.363 11:24:12 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:35.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:35.363 11:24:12 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable
00:20:35.363 11:24:12 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:35.363 [2024-11-15 11:24:12.611244] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization...
00:20:35.363 [2024-11-15 11:24:12.611370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75786 ] 00:20:35.621 [2024-11-15 11:24:12.789138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.621 [2024-11-15 11:24:12.903661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.557 11:24:13 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:36.557 11:24:13 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:20:36.557 11:24:13 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:36.816 [2024-11-15 11:24:13.979094] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.816 [2024-11-15 11:24:13.979161] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.816 [2024-11-15 11:24:14.160959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-15 11:24:14.161179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:36.816 [2024-11-15 11:24:14.161213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:36.816 [2024-11-15 11:24:14.161224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-15 11:24:14.164962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-15 11:24:14.164999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.816 [2024-11-15 11:24:14.165014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.716 ms 00:20:36.816 [2024-11-15 11:24:14.165024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-15 11:24:14.165132] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:36.816 [2024-11-15 11:24:14.166110] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:36.816 [2024-11-15 11:24:14.166145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-15 11:24:14.166164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.816 [2024-11-15 11:24:14.166177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:20:36.816 [2024-11-15 11:24:14.166187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-15 11:24:14.167650] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:36.816 [2024-11-15 11:24:14.187518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-15 11:24:14.187582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:36.816 [2024-11-15 11:24:14.187598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.905 ms 00:20:36.816 [2024-11-15 11:24:14.187611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-15 11:24:14.187707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-15 11:24:14.187723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:36.816 [2024-11-15 11:24:14.187735] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:36.816 [2024-11-15 11:24:14.187748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-15 11:24:14.194302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-15 11:24:14.194345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:36.816 [2024-11-15 11:24:14.194358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.516 ms 00:20:36.816 [2024-11-15 11:24:14.194373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-15 11:24:14.194510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-15 11:24:14.194530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.816 [2024-11-15 11:24:14.194541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:36.816 [2024-11-15 11:24:14.194575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-15 11:24:14.194611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-15 11:24:14.194627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:36.816 [2024-11-15 11:24:14.194638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:36.816 [2024-11-15 11:24:14.194652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-15 11:24:14.194678] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:36.816 [2024-11-15 11:24:14.199315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-15 11:24:14.199345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.816 [2024-11-15 11:24:14.199362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.644 ms 00:20:36.816 [2024-11-15 11:24:14.199373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-15 11:24:14.199448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-15 11:24:14.199461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:36.816 [2024-11-15 11:24:14.199476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:36.816 [2024-11-15 11:24:14.199492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-15 11:24:14.199519] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:36.816 [2024-11-15 11:24:14.199543] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:36.816 [2024-11-15 11:24:14.199608] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:36.816 [2024-11-15 11:24:14.199628] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:36.816 [2024-11-15 11:24:14.199722] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:36.816 [2024-11-15 11:24:14.199735] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:36.816 [2024-11-15 11:24:14.199760] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:36.816 [2024-11-15 11:24:14.199774] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:36.816 [2024-11-15 11:24:14.199791] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:36.816 [2024-11-15 11:24:14.199804] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:36.816 [2024-11-15 11:24:14.199819] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:36.816 [2024-11-15 11:24:14.199829] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:36.816 [2024-11-15 11:24:14.199848] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:36.816 [2024-11-15 11:24:14.199859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-15 11:24:14.199874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:36.817 [2024-11-15 11:24:14.199885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:20:36.817 [2024-11-15 11:24:14.199899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.817 [2024-11-15 11:24:14.199980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.817 [2024-11-15 11:24:14.199996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:36.817 [2024-11-15 11:24:14.200007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:36.817 [2024-11-15 11:24:14.200022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.817 [2024-11-15 11:24:14.200119] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:36.817 [2024-11-15 11:24:14.200139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:36.817 [2024-11-15 11:24:14.200150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.817 [2024-11-15 11:24:14.200166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.817 [2024-11-15 11:24:14.200177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:36.817 [2024-11-15 11:24:14.200195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:36.817 [2024-11-15 11:24:14.200204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:36.817 [2024-11-15 11:24:14.200225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:36.817 [2024-11-15 11:24:14.200235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:36.817 [2024-11-15 11:24:14.200250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.817 [2024-11-15 11:24:14.200259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:36.817 [2024-11-15 11:24:14.200274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:36.817 [2024-11-15 11:24:14.200284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.817 [2024-11-15 11:24:14.200298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:36.817 [2024-11-15 11:24:14.200308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:36.817 [2024-11-15 11:24:14.200322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.817 
[2024-11-15 11:24:14.200331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:36.817 [2024-11-15 11:24:14.200345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:36.817 [2024-11-15 11:24:14.200355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.817 [2024-11-15 11:24:14.200369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:36.817 [2024-11-15 11:24:14.200389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:36.817 [2024-11-15 11:24:14.200403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.817 [2024-11-15 11:24:14.200413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:36.817 [2024-11-15 11:24:14.200431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:36.817 [2024-11-15 11:24:14.200441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.817 [2024-11-15 11:24:14.200455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:36.817 [2024-11-15 11:24:14.200465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:36.817 [2024-11-15 11:24:14.200480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.817 [2024-11-15 11:24:14.200490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:36.817 [2024-11-15 11:24:14.200504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:36.817 [2024-11-15 11:24:14.200513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.817 [2024-11-15 11:24:14.200529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:36.817 [2024-11-15 11:24:14.200538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:36.817 [2024-11-15 11:24:14.200552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.817 [2024-11-15 11:24:14.200573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:36.817 [2024-11-15 11:24:14.200588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:36.817 [2024-11-15 11:24:14.200597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.817 [2024-11-15 11:24:14.200611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:36.817 [2024-11-15 11:24:14.200621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:36.817 [2024-11-15 11:24:14.200639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.817 [2024-11-15 11:24:14.200649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:36.817 [2024-11-15 11:24:14.200663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:36.817 [2024-11-15 11:24:14.200673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.817 [2024-11-15 11:24:14.200686] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:36.817 [2024-11-15 11:24:14.200702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:36.817 [2024-11-15 11:24:14.200716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.817 [2024-11-15 11:24:14.200726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.817 [2024-11-15 11:24:14.200741] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:36.817 [2024-11-15 11:24:14.200751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:36.817 [2024-11-15 11:24:14.200765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:36.817 [2024-11-15 11:24:14.200775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:36.817 [2024-11-15 11:24:14.200789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:36.817 [2024-11-15 11:24:14.200798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:36.817 [2024-11-15 11:24:14.200814] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:36.817 [2024-11-15 11:24:14.200827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.817 [2024-11-15 11:24:14.200849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:36.817 [2024-11-15 11:24:14.200861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:36.817 [2024-11-15 11:24:14.200875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:36.817 [2024-11-15 11:24:14.200886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:36.817 [2024-11-15 11:24:14.200901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:36.817 [2024-11-15 11:24:14.200912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:36.817 [2024-11-15 11:24:14.200927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:36.817 [2024-11-15 11:24:14.200937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:36.817 [2024-11-15 11:24:14.200952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:36.817 [2024-11-15 11:24:14.200963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:36.817 [2024-11-15 11:24:14.200978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:36.817 [2024-11-15 11:24:14.200990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:36.817 [2024-11-15 11:24:14.201005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:36.817 [2024-11-15 11:24:14.201016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:36.817 [2024-11-15 11:24:14.201031] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:36.817 [2024-11-15 
11:24:14.201042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.817 [2024-11-15 11:24:14.201063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:36.817 [2024-11-15 11:24:14.201073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:36.817 [2024-11-15 11:24:14.201088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:36.817 [2024-11-15 11:24:14.201099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:36.817 [2024-11-15 11:24:14.201114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.817 [2024-11-15 11:24:14.201127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:36.817 [2024-11-15 11:24:14.201141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:20:36.817 [2024-11-15 11:24:14.201152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.076 [2024-11-15 11:24:14.241898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.076 [2024-11-15 11:24:14.242060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:37.076 [2024-11-15 11:24:14.242091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.738 ms 00:20:37.077 [2024-11-15 11:24:14.242109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-15 11:24:14.242243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-15 11:24:14.242257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:37.077 [2024-11-15 11:24:14.242273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:37.077 [2024-11-15 11:24:14.242283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-15 11:24:14.291411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-15 11:24:14.291468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:37.077 [2024-11-15 11:24:14.291489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.176 ms 00:20:37.077 [2024-11-15 11:24:14.291499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-15 11:24:14.291608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-15 11:24:14.291622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:37.077 [2024-11-15 11:24:14.291638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:37.077 [2024-11-15 11:24:14.291648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-15 11:24:14.292101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-15 11:24:14.292114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:37.077 [2024-11-15 11:24:14.292136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:20:37.077 [2024-11-15 11:24:14.292146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:37.077 [2024-11-15 11:24:14.292270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-15 11:24:14.292288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:37.077 [2024-11-15 11:24:14.292304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:37.077 [2024-11-15 11:24:14.292314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-15 11:24:14.313097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-15 11:24:14.313130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:37.077 [2024-11-15 11:24:14.313149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.785 ms 00:20:37.077 [2024-11-15 11:24:14.313160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-15 11:24:14.346112] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:37.077 [2024-11-15 11:24:14.346154] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:37.077 [2024-11-15 11:24:14.346175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-15 11:24:14.346187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:37.077 [2024-11-15 11:24:14.346204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.954 ms 00:20:37.077 [2024-11-15 11:24:14.346214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-15 11:24:14.375631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-15 11:24:14.375668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:37.077 [2024-11-15 11:24:14.375687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.376 ms 00:20:37.077 [2024-11-15 11:24:14.375714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-15 11:24:14.393853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-15 11:24:14.393904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:37.077 [2024-11-15 11:24:14.393929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.078 ms 00:20:37.077 [2024-11-15 11:24:14.393939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-15 11:24:14.411743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-15 11:24:14.411776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:37.077 [2024-11-15 11:24:14.411795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.734 ms 00:20:37.077 [2024-11-15 11:24:14.411805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-15 11:24:14.412632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-15 11:24:14.412659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:37.077 [2024-11-15 11:24:14.412675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:20:37.077 [2024-11-15 11:24:14.412686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.336 [2024-11-15 
11:24:14.500338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.336 [2024-11-15 11:24:14.500390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:37.336 [2024-11-15 11:24:14.500410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.760 ms 00:20:37.336 [2024-11-15 11:24:14.500421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.336 [2024-11-15 11:24:14.511618] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:37.336 [2024-11-15 11:24:14.527687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.336 [2024-11-15 11:24:14.527742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:37.336 [2024-11-15 11:24:14.527770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.191 ms 00:20:37.336 [2024-11-15 11:24:14.527785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.336 [2024-11-15 11:24:14.527896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.336 [2024-11-15 11:24:14.527915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:37.336 [2024-11-15 11:24:14.527927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:37.336 [2024-11-15 11:24:14.527943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.336 [2024-11-15 11:24:14.528000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.336 [2024-11-15 11:24:14.528017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:37.336 [2024-11-15 11:24:14.528028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:37.336 [2024-11-15 11:24:14.528048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.336 [2024-11-15 11:24:14.528074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.336 [2024-11-15 11:24:14.528090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:37.336 [2024-11-15 11:24:14.528100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:37.336 [2024-11-15 11:24:14.528116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.336 [2024-11-15 11:24:14.528160] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:37.336 [2024-11-15 11:24:14.528184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.336 [2024-11-15 11:24:14.528194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:37.336 [2024-11-15 11:24:14.528215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:37.336 [2024-11-15 11:24:14.528226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.336 [2024-11-15 11:24:14.564089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.336 [2024-11-15 11:24:14.564129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:37.336 [2024-11-15 11:24:14.564150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.880 ms 00:20:37.336 [2024-11-15 11:24:14.564161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.336 [2024-11-15 11:24:14.565589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.336 [2024-11-15 11:24:14.565610] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:37.336 [2024-11-15 11:24:14.565627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms
00:20:37.336 [2024-11-15 11:24:14.565643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.336 [2024-11-15 11:24:14.566637] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:37.336 [2024-11-15 11:24:14.570804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 406.001 ms, result 0
00:20:37.336 [2024-11-15 11:24:14.571983] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:37.336 Some configs were skipped because the RPC state that can call them passed over.
00:20:37.336 11:24:14 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:37.594 [2024-11-15 11:24:14.823282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.594 [2024-11-15 11:24:14.823475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:37.594 [2024-11-15 11:24:14.823610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms
00:20:37.594 [2024-11-15 11:24:14.823664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.594 [2024-11-15 11:24:14.823748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.876 ms, result 0
00:20:37.594 true
00:20:37.594 11:24:14 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:37.852 [2024-11-15 11:24:15.030831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.852 [2024-11-15 11:24:15.030880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:37.852 [2024-11-15 11:24:15.030901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms
00:20:37.852 [2024-11-15 11:24:15.030911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.852 [2024-11-15 11:24:15.030960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.207 ms, result 0
00:20:37.852 true
00:20:37.852 11:24:15 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 75786
00:20:37.852 11:24:15 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75786 ']'
00:20:37.852 11:24:15 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75786
00:20:37.852 11:24:15 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname
00:20:37.852 11:24:15 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:37.852 11:24:15 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75786
00:20:37.852 killing process with pid 75786
00:20:37.852 11:24:15 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:20:37.852 11:24:15 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:20:37.852 11:24:15 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75786'
00:20:37.852 11:24:15 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75786
00:20:37.852 11:24:15 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75786
00:20:39.229 [2024-11-15 11:24:16.224094]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.229 [2024-11-15 11:24:16.224161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:39.230 [2024-11-15 11:24:16.224178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:39.230 [2024-11-15 11:24:16.224191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.230 [2024-11-15 11:24:16.224217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:39.230 [2024-11-15 11:24:16.228435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.230 [2024-11-15 11:24:16.228470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:39.230 [2024-11-15 11:24:16.228488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.202 ms 00:20:39.230 [2024-11-15 11:24:16.228499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.230 [2024-11-15 11:24:16.228767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.230 [2024-11-15 11:24:16.228781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:39.230 [2024-11-15 11:24:16.228794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:20:39.230 [2024-11-15 11:24:16.228804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.230 [2024-11-15 11:24:16.232085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.230 [2024-11-15 11:24:16.232121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:39.230 [2024-11-15 11:24:16.232138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.262 ms 00:20:39.230 [2024-11-15 11:24:16.232149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.230 [2024-11-15 11:24:16.237840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.230 [2024-11-15 11:24:16.237876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:39.230 [2024-11-15 11:24:16.237891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.660 ms 00:20:39.230 [2024-11-15 11:24:16.237901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.230 [2024-11-15 11:24:16.253147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.230 [2024-11-15 11:24:16.253179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:39.230 [2024-11-15 11:24:16.253198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.211 ms 00:20:39.230 [2024-11-15 11:24:16.253217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.230 [2024-11-15 11:24:16.263805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.230 [2024-11-15 11:24:16.263843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:39.230 [2024-11-15 11:24:16.263859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.532 ms 00:20:39.230 [2024-11-15 11:24:16.263869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.230 [2024-11-15 11:24:16.264015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.230 [2024-11-15 11:24:16.264028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:39.230 [2024-11-15 11:24:16.264041] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:39.230 [2024-11-15 11:24:16.264051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.230 [2024-11-15 11:24:16.279879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.230 [2024-11-15 11:24:16.279912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:39.230 [2024-11-15 11:24:16.279927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.830 ms 00:20:39.230 [2024-11-15 11:24:16.279936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.230 [2024-11-15 11:24:16.295136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.230 [2024-11-15 11:24:16.295278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:39.230 [2024-11-15 11:24:16.295313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.166 ms 00:20:39.230 [2024-11-15 11:24:16.295323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.230 [2024-11-15 11:24:16.309634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.230 [2024-11-15 11:24:16.309767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:39.230 [2024-11-15 11:24:16.309799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.273 ms 00:20:39.230 [2024-11-15 11:24:16.309809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.230 [2024-11-15 11:24:16.323519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.230 [2024-11-15 11:24:16.323550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:39.230 [2024-11-15 11:24:16.323579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.612 ms 00:20:39.230 [2024-11-15 11:24:16.323589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.230 [2024-11-15 11:24:16.323631] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:39.230 [2024-11-15 11:24:16.323647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 
11:24:16.323786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.323992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:39.230 [2024-11-15 11:24:16.324123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:39.230 [2024-11-15 11:24:16.324314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.324996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:39.231 [2024-11-15 11:24:16.325013] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:39.231 [2024-11-15 11:24:16.325038] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b74028cb-3aa2-4783-bfa5-17ab25fa65a1 00:20:39.231 [2024-11-15 11:24:16.325062] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:39.231 [2024-11-15 11:24:16.325084] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:39.231 [2024-11-15 11:24:16.325094] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:39.231 [2024-11-15 11:24:16.325109] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:39.231 [2024-11-15 11:24:16.325119] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:39.231 [2024-11-15 11:24:16.325134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:39.231 [2024-11-15 11:24:16.325144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:39.231 [2024-11-15 11:24:16.325158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:39.231 [2024-11-15 11:24:16.325167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:39.231 [2024-11-15 11:24:16.325181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
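
The "Bands validity" dump above prints one fixed-format record per band ("<valid> / <total> wr_cnt: <writes> state: <state>"), so it is easy to tally offline. A minimal sketch, assuming a saved console log with one record per line as the runner emits it; "build.log" is a placeholder name, not a file produced by this job:

    grep 'ftl_dev_dump_bands.*Band [0-9]' build.log |
    awk '{ valid += $(NF-6); n[$NF]++ }   # last fields: <valid> / <total> wr_cnt: <w> state: <state>
         END { for (s in n) printf "%3d bands in state: %s\n", n[s], s
               printf "valid blocks total: %d\n", valid }'

For the dump above this would report all 100 bands free with 0 valid blocks, consistent with the "total valid LBAs: 0" line in the statistics that follow.
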
00:20:39.231 [2024-11-15 11:24:16.325191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:39.231 [2024-11-15 11:24:16.325206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:20:39.231 [2024-11-15 11:24:16.325216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.231 [2024-11-15 11:24:16.345037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.231 [2024-11-15 11:24:16.345163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:39.231 [2024-11-15 11:24:16.345196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.806 ms 00:20:39.231 [2024-11-15 11:24:16.345207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.231 [2024-11-15 11:24:16.345846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.231 [2024-11-15 11:24:16.345865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:39.231 [2024-11-15 11:24:16.345882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:20:39.231 [2024-11-15 11:24:16.345898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.231 [2024-11-15 11:24:16.417132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.231 [2024-11-15 11:24:16.417168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.231 [2024-11-15 11:24:16.417185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.231 [2024-11-15 11:24:16.417195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.231 [2024-11-15 11:24:16.417281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.231 [2024-11-15 11:24:16.417294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.231 [2024-11-15 11:24:16.417312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.231 [2024-11-15 11:24:16.417329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.231 [2024-11-15 11:24:16.417383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.231 [2024-11-15 11:24:16.417396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.231 [2024-11-15 11:24:16.417416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.231 [2024-11-15 11:24:16.417426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.231 [2024-11-15 11:24:16.417451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.231 [2024-11-15 11:24:16.417462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.231 [2024-11-15 11:24:16.417477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.231 [2024-11-15 11:24:16.417488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.231 [2024-11-15 11:24:16.542868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.231 [2024-11-15 11:24:16.543080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.231 [2024-11-15 11:24:16.543113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.231 [2024-11-15 11:24:16.543124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.490 [2024-11-15 
11:24:16.642761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.490 [2024-11-15 11:24:16.642812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.490 [2024-11-15 11:24:16.642833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.490 [2024-11-15 11:24:16.642849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.490 [2024-11-15 11:24:16.642942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.490 [2024-11-15 11:24:16.642955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.490 [2024-11-15 11:24:16.642976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.490 [2024-11-15 11:24:16.642986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.490 [2024-11-15 11:24:16.643040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.490 [2024-11-15 11:24:16.643051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.490 [2024-11-15 11:24:16.643067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.490 [2024-11-15 11:24:16.643077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.490 [2024-11-15 11:24:16.643203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.490 [2024-11-15 11:24:16.643217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.490 [2024-11-15 11:24:16.643231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.490 [2024-11-15 11:24:16.643241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.490 [2024-11-15 11:24:16.643284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.490 [2024-11-15 11:24:16.643296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:39.490 [2024-11-15 11:24:16.643309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.490 [2024-11-15 11:24:16.643319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.490 [2024-11-15 11:24:16.643365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.490 [2024-11-15 11:24:16.643376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.490 [2024-11-15 11:24:16.643391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.490 [2024-11-15 11:24:16.643401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.490 [2024-11-15 11:24:16.643447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.490 [2024-11-15 11:24:16.643460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.490 [2024-11-15 11:24:16.643472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.490 [2024-11-15 11:24:16.643482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.490 [2024-11-15 11:24:16.643647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 420.206 ms, result 0 00:20:40.424 11:24:17 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:40.424 11:24:17 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:40.424 [2024-11-15 11:24:17.749258] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:20:40.424 [2024-11-15 11:24:17.749379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75850 ] 00:20:40.682 [2024-11-15 11:24:17.931292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.682 [2024-11-15 11:24:18.037772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.249 [2024-11-15 11:24:18.360651] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:41.249 [2024-11-15 11:24:18.360722] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:41.249 [2024-11-15 11:24:18.521965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.250 [2024-11-15 11:24:18.522017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:41.250 [2024-11-15 11:24:18.522032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:41.250 [2024-11-15 11:24:18.522043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.250 [2024-11-15 11:24:18.525117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.250 [2024-11-15 11:24:18.525156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:41.250 [2024-11-15 11:24:18.525169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.057 ms 00:20:41.250 [2024-11-15 11:24:18.525179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.250 [2024-11-15 11:24:18.525276] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:41.250 [2024-11-15 11:24:18.526236] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:41.250 [2024-11-15 11:24:18.526269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.250 [2024-11-15 11:24:18.526280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:41.250 [2024-11-15 11:24:18.526291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:20:41.250 [2024-11-15 11:24:18.526301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.250 [2024-11-15 11:24:18.527776] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:41.250 [2024-11-15 11:24:18.546668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.250 [2024-11-15 11:24:18.546710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:41.250 [2024-11-15 11:24:18.546725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.923 ms 00:20:41.250 [2024-11-15 11:24:18.546735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.250 [2024-11-15 11:24:18.546833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.250 [2024-11-15 11:24:18.546848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:41.250 [2024-11-15 11:24:18.546859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.022 ms 00:20:41.250 [2024-11-15 11:24:18.546870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.250 [2024-11-15 11:24:18.553504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.250 [2024-11-15 11:24:18.553672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:41.250 [2024-11-15 11:24:18.553694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.602 ms 00:20:41.250 [2024-11-15 11:24:18.553705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.250 [2024-11-15 11:24:18.553813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.250 [2024-11-15 11:24:18.553827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:41.250 [2024-11-15 11:24:18.553838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:41.250 [2024-11-15 11:24:18.553848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.250 [2024-11-15 11:24:18.553876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.250 [2024-11-15 11:24:18.553892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:41.250 [2024-11-15 11:24:18.553903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:41.250 [2024-11-15 11:24:18.553913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.250 [2024-11-15 11:24:18.553936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:41.250 [2024-11-15 11:24:18.558726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.250 [2024-11-15 11:24:18.558757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:41.250 [2024-11-15 11:24:18.558770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.804 ms 00:20:41.250 [2024-11-15 11:24:18.558780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.250 [2024-11-15 11:24:18.558847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.250 [2024-11-15 11:24:18.558860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:41.250 [2024-11-15 11:24:18.558871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:41.250 [2024-11-15 11:24:18.558881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.250 [2024-11-15 11:24:18.558901] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:41.250 [2024-11-15 11:24:18.558927] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:41.250 [2024-11-15 11:24:18.558965] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:41.250 [2024-11-15 11:24:18.558983] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:41.250 [2024-11-15 11:24:18.559072] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:41.250 [2024-11-15 11:24:18.559085] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:41.250 [2024-11-15 11:24:18.559098] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:41.250 [2024-11-15 11:24:18.559110] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:41.250 [2024-11-15 11:24:18.559126] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:41.250 [2024-11-15 11:24:18.559137] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:41.250 [2024-11-15 11:24:18.559147] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:41.250 [2024-11-15 11:24:18.559158] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:41.250 [2024-11-15 11:24:18.559167] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:41.250 [2024-11-15 11:24:18.559177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.250 [2024-11-15 11:24:18.559188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:41.250 [2024-11-15 11:24:18.559198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:20:41.250 [2024-11-15 11:24:18.559208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.250 [2024-11-15 11:24:18.559284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.250 [2024-11-15 11:24:18.559298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:41.250 [2024-11-15 11:24:18.559308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:41.250 [2024-11-15 11:24:18.559318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.250 [2024-11-15 11:24:18.559406] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:41.250 [2024-11-15 11:24:18.559419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:41.250 [2024-11-15 11:24:18.559429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:41.250 [2024-11-15 11:24:18.559439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.250 [2024-11-15 11:24:18.559450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:41.250 [2024-11-15 11:24:18.559459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:41.250 [2024-11-15 11:24:18.559468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:41.250 [2024-11-15 11:24:18.559478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:41.250 [2024-11-15 11:24:18.559487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:41.250 [2024-11-15 11:24:18.559497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:41.250 [2024-11-15 11:24:18.559506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:41.250 [2024-11-15 11:24:18.559515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:41.250 [2024-11-15 11:24:18.559526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:41.250 [2024-11-15 11:24:18.559546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:41.250 [2024-11-15 11:24:18.559577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:41.250 [2024-11-15 11:24:18.559588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.250 [2024-11-15 11:24:18.559597] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:41.250 [2024-11-15 11:24:18.559606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:41.250 [2024-11-15 11:24:18.559615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.250 [2024-11-15 11:24:18.559624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:41.250 [2024-11-15 11:24:18.559634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:41.250 [2024-11-15 11:24:18.559643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.250 [2024-11-15 11:24:18.559652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:41.250 [2024-11-15 11:24:18.559662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:41.250 [2024-11-15 11:24:18.559671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.250 [2024-11-15 11:24:18.559680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:41.250 [2024-11-15 11:24:18.559689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:41.250 [2024-11-15 11:24:18.559698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.250 [2024-11-15 11:24:18.559706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:41.250 [2024-11-15 11:24:18.559715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:41.250 [2024-11-15 11:24:18.559724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.250 [2024-11-15 11:24:18.559733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:41.250 [2024-11-15 11:24:18.559742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:41.250 [2024-11-15 11:24:18.559750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:41.250 [2024-11-15 11:24:18.559759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:41.250 [2024-11-15 11:24:18.559768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:41.250 [2024-11-15 11:24:18.559777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:41.250 [2024-11-15 11:24:18.559786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:41.250 [2024-11-15 11:24:18.559795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:41.251 [2024-11-15 11:24:18.559803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.251 [2024-11-15 11:24:18.559812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:41.251 [2024-11-15 11:24:18.559821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:41.251 [2024-11-15 11:24:18.559831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.251 [2024-11-15 11:24:18.559840] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:41.251 [2024-11-15 11:24:18.559850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:41.251 [2024-11-15 11:24:18.559860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:41.251 [2024-11-15 11:24:18.559873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.251 [2024-11-15 11:24:18.559883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:41.251 
[2024-11-15 11:24:18.559893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:41.251 [2024-11-15 11:24:18.559902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:41.251 [2024-11-15 11:24:18.559911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:41.251 [2024-11-15 11:24:18.559920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:41.251 [2024-11-15 11:24:18.559929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:41.251 [2024-11-15 11:24:18.559940] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:41.251 [2024-11-15 11:24:18.559952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:41.251 [2024-11-15 11:24:18.559964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:41.251 [2024-11-15 11:24:18.559974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:41.251 [2024-11-15 11:24:18.559985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:41.251 [2024-11-15 11:24:18.559996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:41.251 [2024-11-15 11:24:18.560006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:41.251 [2024-11-15 11:24:18.560017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:41.251 [2024-11-15 11:24:18.560027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:41.251 [2024-11-15 11:24:18.560037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:41.251 [2024-11-15 11:24:18.560047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:41.251 [2024-11-15 11:24:18.560056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:41.251 [2024-11-15 11:24:18.560067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:41.251 [2024-11-15 11:24:18.560076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:41.251 [2024-11-15 11:24:18.560086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:41.251 [2024-11-15 11:24:18.560097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:41.251 [2024-11-15 11:24:18.560107] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:41.251 [2024-11-15 11:24:18.560118] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:41.251 [2024-11-15 11:24:18.560129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:41.251 [2024-11-15 11:24:18.560139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:41.251 [2024-11-15 11:24:18.560150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:41.251 [2024-11-15 11:24:18.560160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:41.251 [2024-11-15 11:24:18.560171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.251 [2024-11-15 11:24:18.560181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:41.251 [2024-11-15 11:24:18.560196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:20:41.251 [2024-11-15 11:24:18.560206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.251 [2024-11-15 11:24:18.598818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.251 [2024-11-15 11:24:18.598966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:41.251 [2024-11-15 11:24:18.599104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.620 ms 00:20:41.251 [2024-11-15 11:24:18.599142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.251 [2024-11-15 11:24:18.599290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.251 [2024-11-15 11:24:18.599454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:41.251 [2024-11-15 11:24:18.599539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:41.251 [2024-11-15 11:24:18.599587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.510 [2024-11-15 11:24:18.658091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.510 [2024-11-15 11:24:18.658260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:41.510 [2024-11-15 11:24:18.658348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.549 ms 00:20:41.510 [2024-11-15 11:24:18.658391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.658514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.658551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:41.511 [2024-11-15 11:24:18.658657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:41.511 [2024-11-15 11:24:18.658692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.659154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.659253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:41.511 [2024-11-15 11:24:18.659353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:20:41.511 [2024-11-15 11:24:18.659394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 
11:24:18.659548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.659608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:41.511 [2024-11-15 11:24:18.659682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:41.511 [2024-11-15 11:24:18.659716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.680475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.680617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:41.511 [2024-11-15 11:24:18.680694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.745 ms 00:20:41.511 [2024-11-15 11:24:18.680731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.700854] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:41.511 [2024-11-15 11:24:18.701006] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:41.511 [2024-11-15 11:24:18.701097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.701130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:41.511 [2024-11-15 11:24:18.701161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.264 ms 00:20:41.511 [2024-11-15 11:24:18.701190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.730830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.730970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:41.511 [2024-11-15 11:24:18.731093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.593 ms 00:20:41.511 [2024-11-15 11:24:18.731130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.749868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.749995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:41.511 [2024-11-15 11:24:18.750065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.639 ms 00:20:41.511 [2024-11-15 11:24:18.750100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.767998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.768139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:41.511 [2024-11-15 11:24:18.768272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.824 ms 00:20:41.511 [2024-11-15 11:24:18.768308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.769061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.769183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:41.511 [2024-11-15 11:24:18.769258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:20:41.511 [2024-11-15 11:24:18.769292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.854382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.854596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:41.511 [2024-11-15 11:24:18.854755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.173 ms 00:20:41.511 [2024-11-15 11:24:18.854774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.865520] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:41.511 [2024-11-15 11:24:18.881394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.881438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:41.511 [2024-11-15 11:24:18.881453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.535 ms 00:20:41.511 [2024-11-15 11:24:18.881469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.881604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.881618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:41.511 [2024-11-15 11:24:18.881630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:41.511 [2024-11-15 11:24:18.881640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.881697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.881707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:41.511 [2024-11-15 11:24:18.881719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:41.511 [2024-11-15 11:24:18.881728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.881767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.881781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:41.511 [2024-11-15 11:24:18.881792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:41.511 [2024-11-15 11:24:18.881801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.511 [2024-11-15 11:24:18.881838] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:41.511 [2024-11-15 11:24:18.881851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.511 [2024-11-15 11:24:18.881861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:41.511 [2024-11-15 11:24:18.881871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:41.511 [2024-11-15 11:24:18.881881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.770 [2024-11-15 11:24:18.918021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.770 [2024-11-15 11:24:18.918060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:41.770 [2024-11-15 11:24:18.918075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.176 ms 00:20:41.770 [2024-11-15 11:24:18.918086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.770 [2024-11-15 11:24:18.918209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.770 [2024-11-15 11:24:18.918224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:41.770 [2024-11-15 11:24:18.918235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:41.770 [2024-11-15 11:24:18.918245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.770 [2024-11-15 11:24:18.919160] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:41.770 [2024-11-15 11:24:18.923143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 397.540 ms, result 0 00:20:41.770 [2024-11-15 11:24:18.923999] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:41.770 [2024-11-15 11:24:18.942485] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.706  [2024-11-15T11:24:21.042Z] Copying: 31/256 [MB] (31 MBps) [2024-11-15T11:24:21.977Z] Copying: 58/256 [MB] (27 MBps) [2024-11-15T11:24:23.352Z] Copying: 86/256 [MB] (27 MBps) [2024-11-15T11:24:24.304Z] Copying: 113/256 [MB] (27 MBps) [2024-11-15T11:24:25.240Z] Copying: 141/256 [MB] (27 MBps) [2024-11-15T11:24:26.190Z] Copying: 168/256 [MB] (26 MBps) [2024-11-15T11:24:27.124Z] Copying: 193/256 [MB] (25 MBps) [2024-11-15T11:24:28.058Z] Copying: 220/256 [MB] (27 MBps) [2024-11-15T11:24:28.317Z] Copying: 246/256 [MB] (26 MBps) [2024-11-15T11:24:28.317Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-15 11:24:28.277328] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:50.916 [2024-11-15 11:24:28.291938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.916 [2024-11-15 11:24:28.291980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:50.916 [2024-11-15 11:24:28.291996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:50.916 [2024-11-15 11:24:28.292012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.916 [2024-11-15 11:24:28.292035] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:50.916 [2024-11-15 11:24:28.295997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.916 [2024-11-15 11:24:28.296139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:50.916 [2024-11-15 11:24:28.296159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.951 ms 00:20:50.916 [2024-11-15 11:24:28.296170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.916 [2024-11-15 11:24:28.296397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.916 [2024-11-15 11:24:28.296410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:50.916 [2024-11-15 11:24:28.296421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:20:50.916 [2024-11-15 11:24:28.296431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.916 [2024-11-15 11:24:28.299438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.916 [2024-11-15 11:24:28.299572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:50.916 [2024-11-15 11:24:28.299695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:20:50.916 [2024-11-15 11:24:28.299734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
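
Every management step is traced with the same four-record pattern (Action or Rollback, then name, duration, status), and each "Management process finished" record gives the end-to-end total, e.g. 'FTL startup', duration = 397.540 ms above. A minimal sketch for aggregating time per step name across a whole run, again assuming the one-record-per-line console form and "build.log" as a placeholder path:

    awk -F': +' '
      /trace_step/ && /name:/     { step = $NF }            # e.g. "Persist L2P"
      /trace_step/ && /duration:/ { sub(/ ms.*/, "", $NF)   # strip trailing " ms"
                                    total[step] += $NF; runs[step]++ }
      END { for (s in total)
              printf "%10.3f ms  %3d run(s)  %s\n", total[s], runs[s], s }
    ' build.log | sort -rn

Summing the per-step durations of a single startup or shutdown should land close to the matching finish_msg total; a large gap would point at time spent outside the traced steps.
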
00:20:50.916 [2024-11-15 11:24:28.305438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.916 [2024-11-15 11:24:28.305592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:50.916 [2024-11-15 11:24:28.305615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.666 ms 00:20:50.916 [2024-11-15 11:24:28.305628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.175 [2024-11-15 11:24:28.341665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.175 [2024-11-15 11:24:28.341702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:51.175 [2024-11-15 11:24:28.341715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.023 ms 00:20:51.175 [2024-11-15 11:24:28.341726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.175 [2024-11-15 11:24:28.362850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.175 [2024-11-15 11:24:28.363000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:51.175 [2024-11-15 11:24:28.363029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.105 ms 00:20:51.175 [2024-11-15 11:24:28.363040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.175 [2024-11-15 11:24:28.363175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.175 [2024-11-15 11:24:28.363188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:51.175 [2024-11-15 11:24:28.363199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:51.175 [2024-11-15 11:24:28.363209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.175 [2024-11-15 11:24:28.399185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.175 [2024-11-15 11:24:28.399226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:51.175 [2024-11-15 11:24:28.399242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.002 ms 00:20:51.175 [2024-11-15 11:24:28.399255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.175 [2024-11-15 11:24:28.434247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.175 [2024-11-15 11:24:28.434407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:51.175 [2024-11-15 11:24:28.434429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.003 ms 00:20:51.175 [2024-11-15 11:24:28.434439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.175 [2024-11-15 11:24:28.469487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.175 [2024-11-15 11:24:28.469643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:51.175 [2024-11-15 11:24:28.469664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.060 ms 00:20:51.175 [2024-11-15 11:24:28.469676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.175 [2024-11-15 11:24:28.504760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.175 [2024-11-15 11:24:28.504894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:51.175 [2024-11-15 11:24:28.504915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.037 ms 00:20:51.175 [2024-11-15 
11:24:28.504925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.175 [2024-11-15 11:24:28.504967] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:51.175 [2024-11-15 11:24:28.504983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.504996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:51.175 [2024-11-15 11:24:28.505178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505231] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 
11:24:28.505495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:20:51.176 [2024-11-15 11:24:28.505813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.505994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.506005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.506016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.506026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.506037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.506049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.506078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.506088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.506099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.506109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.506120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:51.176 [2024-11-15 11:24:28.506139] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:51.176 [2024-11-15 11:24:28.506157] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b74028cb-3aa2-4783-bfa5-17ab25fa65a1 00:20:51.176 [2024-11-15 11:24:28.506169] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:51.176 [2024-11-15 11:24:28.506179] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:51.176 [2024-11-15 11:24:28.506189] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:51.176 [2024-11-15 11:24:28.506199] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:51.176 [2024-11-15 11:24:28.506209] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:51.176 [2024-11-15 11:24:28.506219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:51.176 [2024-11-15 11:24:28.506229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:51.176 [2024-11-15 11:24:28.506238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:51.176 [2024-11-15 11:24:28.506247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:51.177 [2024-11-15 11:24:28.506257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.177 [2024-11-15 11:24:28.506275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:51.177 [2024-11-15 11:24:28.506286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:20:51.177 [2024-11-15 11:24:28.506296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.177 [2024-11-15 11:24:28.525963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.177 [2024-11-15 11:24:28.526103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:51.177 [2024-11-15 11:24:28.526251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.675 ms 00:20:51.177 [2024-11-15 11:24:28.526303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.177 [2024-11-15 11:24:28.526859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.177 [2024-11-15 11:24:28.526976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:51.177 [2024-11-15 11:24:28.527052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:20:51.177 [2024-11-15 11:24:28.527087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.435 [2024-11-15 11:24:28.581051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.435 [2024-11-15 11:24:28.581193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:51.435 [2024-11-15 11:24:28.581314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.435 [2024-11-15 11:24:28.581356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.435 [2024-11-15 11:24:28.581510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.435 [2024-11-15 11:24:28.581627] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:51.435 [2024-11-15 11:24:28.581672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.435 [2024-11-15 11:24:28.581707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.435 [2024-11-15 11:24:28.581837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.435 [2024-11-15 11:24:28.581915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:51.435 [2024-11-15 11:24:28.581982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.435 [2024-11-15 11:24:28.582021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.435 [2024-11-15 11:24:28.582071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.435 [2024-11-15 11:24:28.582268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:51.435 [2024-11-15 11:24:28.582310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.435 [2024-11-15 11:24:28.582349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.435 [2024-11-15 11:24:28.705156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.435 [2024-11-15 11:24:28.705394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:51.435 [2024-11-15 11:24:28.705550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.435 [2024-11-15 11:24:28.705577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.435 [2024-11-15 11:24:28.803186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.435 [2024-11-15 11:24:28.803250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:51.435 [2024-11-15 11:24:28.803268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.435 [2024-11-15 11:24:28.803282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.435 [2024-11-15 11:24:28.803367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.435 [2024-11-15 11:24:28.803381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:51.435 [2024-11-15 11:24:28.803395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.435 [2024-11-15 11:24:28.803408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.435 [2024-11-15 11:24:28.803440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.435 [2024-11-15 11:24:28.803454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:51.435 [2024-11-15 11:24:28.803478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.435 [2024-11-15 11:24:28.803490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.435 [2024-11-15 11:24:28.803637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.435 [2024-11-15 11:24:28.803654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:51.435 [2024-11-15 11:24:28.803668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.435 [2024-11-15 11:24:28.803680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.435 [2024-11-15 11:24:28.803724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:20:51.435 [2024-11-15 11:24:28.803739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:51.435 [2024-11-15 11:24:28.803752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.435 [2024-11-15 11:24:28.803774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.435 [2024-11-15 11:24:28.803818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.435 [2024-11-15 11:24:28.803832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:51.435 [2024-11-15 11:24:28.803845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.435 [2024-11-15 11:24:28.803857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.435 [2024-11-15 11:24:28.803907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.435 [2024-11-15 11:24:28.803921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:51.435 [2024-11-15 11:24:28.803941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.435 [2024-11-15 11:24:28.803954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.435 [2024-11-15 11:24:28.804123] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 513.003 ms, result 0 00:20:52.810 00:20:52.810 00:20:52.810 11:24:29 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:52.810 11:24:29 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:53.070 11:24:30 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:53.070 [2024-11-15 11:24:30.427119] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
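Between the shutdown above and the spdk_dd write that follows, trim.sh verifies the read-back data: cmp checks the first 4 MiB of the dumped file against /dev/zero and md5sum fingerprints it, then spdk_dd writes 1024 blocks of the random pattern through bdev ftl0. A minimal sketch in Python of the same zero-fill check; the path is taken from the cmp invocation above, and the all-zeros expectation is an assumption based on what a successful trim of that range should leave behind:

# Sketch of `cmp --bytes=4194304 .../test/ftl/data /dev/zero`:
# the first 4 MiB of the read-back file must be zero-filled.
path = "/home/vagrant/spdk_repo/spdk/test/ftl/data"  # path from the log above
with open(path, "rb") as f:
    head = f.read(4 * 1024 * 1024)
assert head == bytes(len(head)), "trimmed range did not read back as zeros"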
00:20:53.070 [2024-11-15 11:24:30.427403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75988 ] 00:20:53.329 [2024-11-15 11:24:30.607896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.329 [2024-11-15 11:24:30.722452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.898 [2024-11-15 11:24:31.080344] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:53.898 [2024-11-15 11:24:31.080420] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:53.898 [2024-11-15 11:24:31.241904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.898 [2024-11-15 11:24:31.241970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:53.898 [2024-11-15 11:24:31.241987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:53.898 [2024-11-15 11:24:31.241998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.898 [2024-11-15 11:24:31.245119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.898 [2024-11-15 11:24:31.245159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:53.898 [2024-11-15 11:24:31.245172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.104 ms 00:20:53.898 [2024-11-15 11:24:31.245182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.898 [2024-11-15 11:24:31.245276] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:53.898 [2024-11-15 11:24:31.246199] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:53.898 [2024-11-15 11:24:31.246234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.898 [2024-11-15 11:24:31.246246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:53.898 [2024-11-15 11:24:31.246257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:20:53.898 [2024-11-15 11:24:31.246267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.898 [2024-11-15 11:24:31.247751] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:53.898 [2024-11-15 11:24:31.267469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.898 [2024-11-15 11:24:31.267516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:53.898 [2024-11-15 11:24:31.267530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.750 ms 00:20:53.898 [2024-11-15 11:24:31.267542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.898 [2024-11-15 11:24:31.267652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.898 [2024-11-15 11:24:31.267667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:53.898 [2024-11-15 11:24:31.267679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:53.898 [2024-11-15 11:24:31.267690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.898 [2024-11-15 11:24:31.274369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:53.898 [2024-11-15 11:24:31.274518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:53.898 [2024-11-15 11:24:31.274555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.648 ms 00:20:53.898 [2024-11-15 11:24:31.274567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.898 [2024-11-15 11:24:31.274696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.898 [2024-11-15 11:24:31.274712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:53.898 [2024-11-15 11:24:31.274724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:53.898 [2024-11-15 11:24:31.274735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.898 [2024-11-15 11:24:31.274768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.898 [2024-11-15 11:24:31.274784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:53.898 [2024-11-15 11:24:31.274796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:53.898 [2024-11-15 11:24:31.274807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.898 [2024-11-15 11:24:31.274832] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:53.898 [2024-11-15 11:24:31.279578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.898 [2024-11-15 11:24:31.279610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:53.898 [2024-11-15 11:24:31.279622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.760 ms 00:20:53.898 [2024-11-15 11:24:31.279633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.898 [2024-11-15 11:24:31.279701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.898 [2024-11-15 11:24:31.279713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:53.898 [2024-11-15 11:24:31.279724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:53.898 [2024-11-15 11:24:31.279735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.898 [2024-11-15 11:24:31.279755] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:53.898 [2024-11-15 11:24:31.279782] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:53.898 [2024-11-15 11:24:31.279818] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:53.898 [2024-11-15 11:24:31.279837] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:53.899 [2024-11-15 11:24:31.279928] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:53.899 [2024-11-15 11:24:31.279941] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:53.899 [2024-11-15 11:24:31.279954] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:53.899 [2024-11-15 11:24:31.279967] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:53.899 [2024-11-15 11:24:31.279983] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:53.899 [2024-11-15 11:24:31.279994] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:53.899 [2024-11-15 11:24:31.280004] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:53.899 [2024-11-15 11:24:31.280013] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:53.899 [2024-11-15 11:24:31.280023] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:53.899 [2024-11-15 11:24:31.280035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.899 [2024-11-15 11:24:31.280046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:53.899 [2024-11-15 11:24:31.280056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:20:53.899 [2024-11-15 11:24:31.280066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.899 [2024-11-15 11:24:31.280143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.899 [2024-11-15 11:24:31.280157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:53.899 [2024-11-15 11:24:31.280168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:53.899 [2024-11-15 11:24:31.280178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.899 [2024-11-15 11:24:31.280265] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:53.899 [2024-11-15 11:24:31.280278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:53.899 [2024-11-15 11:24:31.280290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:53.899 [2024-11-15 11:24:31.280300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.899 [2024-11-15 11:24:31.280311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:53.899 [2024-11-15 11:24:31.280320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:53.899 [2024-11-15 11:24:31.280330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:53.899 [2024-11-15 11:24:31.280339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:53.899 [2024-11-15 11:24:31.280349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:53.899 [2024-11-15 11:24:31.280357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:53.899 [2024-11-15 11:24:31.280367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:53.899 [2024-11-15 11:24:31.280376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:53.899 [2024-11-15 11:24:31.280385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:53.899 [2024-11-15 11:24:31.280405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:53.899 [2024-11-15 11:24:31.280415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:53.899 [2024-11-15 11:24:31.280424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.899 [2024-11-15 11:24:31.280433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:53.899 [2024-11-15 11:24:31.280443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:53.899 [2024-11-15 11:24:31.280452] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.899 [2024-11-15 11:24:31.280461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:53.899 [2024-11-15 11:24:31.280470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:53.899 [2024-11-15 11:24:31.280480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.899 [2024-11-15 11:24:31.280489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:53.899 [2024-11-15 11:24:31.280498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:53.899 [2024-11-15 11:24:31.280507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.899 [2024-11-15 11:24:31.280516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:53.899 [2024-11-15 11:24:31.280525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:53.899 [2024-11-15 11:24:31.280534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.899 [2024-11-15 11:24:31.280543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:53.899 [2024-11-15 11:24:31.280553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:53.899 [2024-11-15 11:24:31.280575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.899 [2024-11-15 11:24:31.280584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:53.899 [2024-11-15 11:24:31.280593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:53.899 [2024-11-15 11:24:31.280602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:53.899 [2024-11-15 11:24:31.280611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:53.899 [2024-11-15 11:24:31.280621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:53.899 [2024-11-15 11:24:31.280630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:53.899 [2024-11-15 11:24:31.280640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:53.899 [2024-11-15 11:24:31.280649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:53.899 [2024-11-15 11:24:31.280658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.899 [2024-11-15 11:24:31.280667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:53.899 [2024-11-15 11:24:31.280677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:53.899 [2024-11-15 11:24:31.280687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.899 [2024-11-15 11:24:31.280696] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:53.899 [2024-11-15 11:24:31.280706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:53.899 [2024-11-15 11:24:31.280716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:53.899 [2024-11-15 11:24:31.280730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.899 [2024-11-15 11:24:31.280740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:53.899 [2024-11-15 11:24:31.280749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:53.899 [2024-11-15 11:24:31.280759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:53.899 
[2024-11-15 11:24:31.280768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:53.899 [2024-11-15 11:24:31.280778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:53.899 [2024-11-15 11:24:31.280787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:53.899 [2024-11-15 11:24:31.280798] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:53.899 [2024-11-15 11:24:31.280809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:53.899 [2024-11-15 11:24:31.280822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:53.899 [2024-11-15 11:24:31.280832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:53.899 [2024-11-15 11:24:31.280843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:53.899 [2024-11-15 11:24:31.280854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:53.899 [2024-11-15 11:24:31.280864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:53.899 [2024-11-15 11:24:31.280875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:53.899 [2024-11-15 11:24:31.280885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:53.899 [2024-11-15 11:24:31.280896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:53.899 [2024-11-15 11:24:31.280906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:53.899 [2024-11-15 11:24:31.280917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:53.899 [2024-11-15 11:24:31.280927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:53.899 [2024-11-15 11:24:31.280937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:53.899 [2024-11-15 11:24:31.280947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:53.899 [2024-11-15 11:24:31.280958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:53.899 [2024-11-15 11:24:31.280969] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:53.899 [2024-11-15 11:24:31.280979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:53.899 [2024-11-15 11:24:31.280990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:53.899 [2024-11-15 11:24:31.281000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:53.899 [2024-11-15 11:24:31.281010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:53.899 [2024-11-15 11:24:31.281022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:53.899 [2024-11-15 11:24:31.281033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.899 [2024-11-15 11:24:31.281043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:53.899 [2024-11-15 11:24:31.281057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:20:53.899 [2024-11-15 11:24:31.281066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.159 [2024-11-15 11:24:31.320591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.159 [2024-11-15 11:24:31.320631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.159 [2024-11-15 11:24:31.320645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.532 ms 00:20:54.159 [2024-11-15 11:24:31.320656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.159 [2024-11-15 11:24:31.320788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.159 [2024-11-15 11:24:31.320806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:54.159 [2024-11-15 11:24:31.320817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:54.159 [2024-11-15 11:24:31.320827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.159 [2024-11-15 11:24:31.378759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.159 [2024-11-15 11:24:31.378797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.159 [2024-11-15 11:24:31.378813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.001 ms 00:20:54.159 [2024-11-15 11:24:31.378827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.159 [2024-11-15 11:24:31.378932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.159 [2024-11-15 11:24:31.378945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.159 [2024-11-15 11:24:31.378957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:54.159 [2024-11-15 11:24:31.378967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.159 [2024-11-15 11:24:31.379398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.159 [2024-11-15 11:24:31.379413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.159 [2024-11-15 11:24:31.379424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:20:54.159 [2024-11-15 11:24:31.379438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.159 [2024-11-15 11:24:31.379579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.159 [2024-11-15 11:24:31.379593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.159 [2024-11-15 11:24:31.379605] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:20:54.159 [2024-11-15 11:24:31.379615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.159 [2024-11-15 11:24:31.399356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.159 [2024-11-15 11:24:31.399508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.159 [2024-11-15 11:24:31.399531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.749 ms 00:20:54.159 [2024-11-15 11:24:31.399542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.159 [2024-11-15 11:24:31.419000] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:54.159 [2024-11-15 11:24:31.419038] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:54.159 [2024-11-15 11:24:31.419054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.159 [2024-11-15 11:24:31.419065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:54.159 [2024-11-15 11:24:31.419077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.400 ms 00:20:54.159 [2024-11-15 11:24:31.419088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.159 [2024-11-15 11:24:31.448905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.159 [2024-11-15 11:24:31.448966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:54.159 [2024-11-15 11:24:31.448981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.785 ms 00:20:54.159 [2024-11-15 11:24:31.448992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.159 [2024-11-15 11:24:31.467063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.159 [2024-11-15 11:24:31.467098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:54.159 [2024-11-15 11:24:31.467111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.015 ms 00:20:54.159 [2024-11-15 11:24:31.467122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.159 [2024-11-15 11:24:31.484893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.159 [2024-11-15 11:24:31.485028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:54.159 [2024-11-15 11:24:31.485048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.724 ms 00:20:54.159 [2024-11-15 11:24:31.485059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.160 [2024-11-15 11:24:31.485877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.160 [2024-11-15 11:24:31.485901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:54.160 [2024-11-15 11:24:31.485913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:20:54.160 [2024-11-15 11:24:31.485923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.419 [2024-11-15 11:24:31.572446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.419 [2024-11-15 11:24:31.572690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:54.419 [2024-11-15 11:24:31.572719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.633 ms 00:20:54.419 [2024-11-15 11:24:31.572731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.419 [2024-11-15 11:24:31.583786] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:54.419 [2024-11-15 11:24:31.599720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.419 [2024-11-15 11:24:31.599767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:54.419 [2024-11-15 11:24:31.599785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.928 ms 00:20:54.419 [2024-11-15 11:24:31.599802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.419 [2024-11-15 11:24:31.599928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.419 [2024-11-15 11:24:31.599942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:54.419 [2024-11-15 11:24:31.599953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:54.419 [2024-11-15 11:24:31.599963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.419 [2024-11-15 11:24:31.600022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.419 [2024-11-15 11:24:31.600033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:54.419 [2024-11-15 11:24:31.600044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:54.419 [2024-11-15 11:24:31.600054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.419 [2024-11-15 11:24:31.600095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.419 [2024-11-15 11:24:31.600108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:54.419 [2024-11-15 11:24:31.600119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:54.419 [2024-11-15 11:24:31.600129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.419 [2024-11-15 11:24:31.600166] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:54.419 [2024-11-15 11:24:31.600179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.419 [2024-11-15 11:24:31.600189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:54.419 [2024-11-15 11:24:31.600199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:54.419 [2024-11-15 11:24:31.600209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.419 [2024-11-15 11:24:31.637773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.419 [2024-11-15 11:24:31.637814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:54.419 [2024-11-15 11:24:31.637828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.604 ms 00:20:54.419 [2024-11-15 11:24:31.637839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.419 [2024-11-15 11:24:31.637956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.419 [2024-11-15 11:24:31.637971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:54.419 [2024-11-15 11:24:31.637982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:54.419 [2024-11-15 11:24:31.637992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
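Every management step above is traced as an Action / name / duration / status quadruple, and the per-step durations nearly account for the 'FTL startup' total reported just below (roughly 391 ms of step time against 397.381 ms overall; the remainder is inter-step overhead). A minimal parser sketch in Python; both regular expressions are assumptions fitted to the NOTICE format of this capture, and the zip assumes the name and duration lists stay aligned:

import re

def step_durations(log_text: str):
    # Step names in this capture are terminated by the next elapsed-time
    # stamp, e.g. "00:20:54.419 [", which the first pattern keys on.
    names = re.findall(r"name: (.+?) \d{2}:\d{2}:\d{2}\.\d{3} \[", log_text)
    durations = [float(ms) for ms in re.findall(r"duration: ([\d.]+) ms", log_text)]
    return list(zip(names, durations))

# e.g. sum(ms for _, ms in step_durations(log)) over this startup's slice of
# the log gives roughly 391 ms, close to the reported 397.381 ms total.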
00:20:54.419 [2024-11-15 11:24:31.638943] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:54.419 [2024-11-15 11:24:31.643336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 397.381 ms, result 0 00:20:54.419 [2024-11-15 11:24:31.644276] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:54.419 [2024-11-15 11:24:31.663255] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:54.419  [2024-11-15T11:24:31.820Z] Copying: 4096/4096 [kB] (average 26 MBps)[2024-11-15 11:24:31.818207] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:54.680 [2024-11-15 11:24:31.832571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.680 [2024-11-15 11:24:31.832707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:54.680 [2024-11-15 11:24:31.832729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:54.680 [2024-11-15 11:24:31.832747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.680 [2024-11-15 11:24:31.832778] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:54.680 [2024-11-15 11:24:31.837088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.680 [2024-11-15 11:24:31.837116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:54.680 [2024-11-15 11:24:31.837128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.300 ms 00:20:54.680 [2024-11-15 11:24:31.837138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.680 [2024-11-15 11:24:31.838947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.680 [2024-11-15 11:24:31.838983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:54.680 [2024-11-15 11:24:31.838996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.789 ms 00:20:54.680 [2024-11-15 11:24:31.839006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.680 [2024-11-15 11:24:31.842272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.680 [2024-11-15 11:24:31.842309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:54.680 [2024-11-15 11:24:31.842321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.253 ms 00:20:54.680 [2024-11-15 11:24:31.842332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.680 [2024-11-15 11:24:31.847995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.680 [2024-11-15 11:24:31.848127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:54.680 [2024-11-15 11:24:31.848147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.641 ms 00:20:54.680 [2024-11-15 11:24:31.848158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.680 [2024-11-15 11:24:31.885013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.680 [2024-11-15 11:24:31.885051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:54.680 [2024-11-15 11:24:31.885064] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 36.847 ms 00:20:54.680 [2024-11-15 11:24:31.885074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.680 [2024-11-15 11:24:31.906478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.680 [2024-11-15 11:24:31.906521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:54.680 [2024-11-15 11:24:31.906539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.382 ms 00:20:54.680 [2024-11-15 11:24:31.906550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.680 [2024-11-15 11:24:31.906731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.680 [2024-11-15 11:24:31.906746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:54.680 [2024-11-15 11:24:31.906757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:54.680 [2024-11-15 11:24:31.906767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.680 [2024-11-15 11:24:31.942404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.680 [2024-11-15 11:24:31.942541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:54.680 [2024-11-15 11:24:31.942574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.665 ms 00:20:54.681 [2024-11-15 11:24:31.942585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.681 [2024-11-15 11:24:31.978086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.681 [2024-11-15 11:24:31.978225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:54.681 [2024-11-15 11:24:31.978245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.487 ms 00:20:54.681 [2024-11-15 11:24:31.978255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.681 [2024-11-15 11:24:32.014257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.681 [2024-11-15 11:24:32.014292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:54.681 [2024-11-15 11:24:32.014305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.004 ms 00:20:54.681 [2024-11-15 11:24:32.014315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.681 [2024-11-15 11:24:32.050516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.681 [2024-11-15 11:24:32.050551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:54.681 [2024-11-15 11:24:32.050575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.157 ms 00:20:54.681 [2024-11-15 11:24:32.050585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.681 [2024-11-15 11:24:32.050639] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:54.681 [2024-11-15 11:24:32.050656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:54.681 [2024-11-15 11:24:32.050701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.050998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:54.681 [2024-11-15 11:24:32.051475] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:20:54.682 [2024-11-15 11:24:32.051736] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:54.682 [2024-11-15 11:24:32.051746] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b74028cb-3aa2-4783-bfa5-17ab25fa65a1
00:20:54.682 [2024-11-15 11:24:32.051757] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:54.682 [2024-11-15 11:24:32.051767] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:54.682 [2024-11-15 11:24:32.051776] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:54.682 [2024-11-15 11:24:32.051786] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:54.682 [2024-11-15 11:24:32.051797] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:54.682 [2024-11-15 11:24:32.051807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:54.682 [2024-11-15 11:24:32.051817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:54.682 [2024-11-15 11:24:32.051826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:54.682 [2024-11-15 11:24:32.051835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:54.682 [2024-11-15 11:24:32.051845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.682 [2024-11-15 11:24:32.051859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:54.682 [2024-11-15 11:24:32.051870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms
00:20:54.682 [2024-11-15 11:24:32.051880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
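The WAF figure in the dump above is the write amplification factor: total media writes divided by writes submitted by the user. With 960 internal writes (metadata persistence during shutdown) and zero user writes at this point in the test, the division is by zero and the log prints inf. A minimal sketch of the same arithmetic in shell; the waf helper below is hypothetical, not part of the test scripts:

    # Hypothetical helper mirroring the WAF line in ftl_dev_dump_stats:
    # WAF = total media writes / user writes, "inf" when there is no user I/O yet.
    waf() {
        local total=$1 user=$2
        if [ "$user" -eq 0 ]; then
            echo inf
        else
            awk -v t="$total" -v u="$user" 'BEGIN { printf "%.2f\n", t / u }'
        fi
    }
    waf 960 0    # -> inf, matching the dump above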
00:20:54.682 [2024-11-15 11:24:32.072017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.682 [2024-11-15 11:24:32.072050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:54.682 [2024-11-15 11:24:32.072063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.149 ms
00:20:54.682 [2024-11-15 11:24:32.072073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:54.682 [2024-11-15 11:24:32.072614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.682 [2024-11-15 11:24:32.072627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:54.682 [2024-11-15 11:24:32.072638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms
00:20:54.682 [2024-11-15 11:24:32.072648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:54.941 [2024-11-15 11:24:32.127485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:54.941 [2024-11-15 11:24:32.127520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:54.941 [2024-11-15 11:24:32.127534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:54.941 [2024-11-15 11:24:32.127545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:54.941 [2024-11-15 11:24:32.127652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:54.941 [2024-11-15 11:24:32.127664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:54.941 [2024-11-15 11:24:32.127694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:54.941 [2024-11-15 11:24:32.127704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:54.941 [2024-11-15 11:24:32.127757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:54.941 [2024-11-15 11:24:32.127770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:54.941 [2024-11-15 11:24:32.127781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:54.941 [2024-11-15 11:24:32.127792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:54.941 [2024-11-15 11:24:32.127812]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.941 [2024-11-15 11:24:32.127828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.941 [2024-11-15 11:24:32.127838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.941 [2024-11-15 11:24:32.127848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.941 [2024-11-15 11:24:32.252534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.941 [2024-11-15 11:24:32.252600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.941 [2024-11-15 11:24:32.252615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.941 [2024-11-15 11:24:32.252626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.200 [2024-11-15 11:24:32.354382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.200 [2024-11-15 11:24:32.354431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.200 [2024-11-15 11:24:32.354445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.200 [2024-11-15 11:24:32.354456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.200 [2024-11-15 11:24:32.354553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.200 [2024-11-15 11:24:32.354584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.200 [2024-11-15 11:24:32.354596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.200 [2024-11-15 11:24:32.354606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.200 [2024-11-15 11:24:32.354637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.200 [2024-11-15 11:24:32.354649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.200 [2024-11-15 11:24:32.354665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.200 [2024-11-15 11:24:32.354691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.200 [2024-11-15 11:24:32.354800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.200 [2024-11-15 11:24:32.354814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.200 [2024-11-15 11:24:32.354825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.200 [2024-11-15 11:24:32.354835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.200 [2024-11-15 11:24:32.354873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.200 [2024-11-15 11:24:32.354885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:55.200 [2024-11-15 11:24:32.354900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.200 [2024-11-15 11:24:32.354910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.200 [2024-11-15 11:24:32.354951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.200 [2024-11-15 11:24:32.354962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.200 [2024-11-15 11:24:32.354973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.200 [2024-11-15 11:24:32.354982] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:55.200 [2024-11-15 11:24:32.355026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.200 [2024-11-15 11:24:32.355038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.200 [2024-11-15 11:24:32.355052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.200 [2024-11-15 11:24:32.355062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.200 [2024-11-15 11:24:32.355201] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 523.475 ms, result 0 00:20:56.136 00:20:56.136 00:20:56.136 11:24:33 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76025 00:20:56.136 11:24:33 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:56.136 11:24:33 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76025 00:20:56.136 11:24:33 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 76025 ']' 00:20:56.136 11:24:33 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.136 11:24:33 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:56.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.136 11:24:33 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.136 11:24:33 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:56.136 11:24:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:56.395 [2024-11-15 11:24:33.538311] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
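At this point trim.sh has launched a fresh spdk_tgt and is blocking in waitforlisten until the target serves RPCs on /var/tmp/spdk.sock (the rpc_addr and max_retries locals traced above). A minimal sketch of that polling idea, assuming the stock scripts/rpc.py client; the real helper in autotest_common.sh differs in detail:

    # Poll the RPC socket until spdk_tgt answers, up to max_retries tries.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        # rpc_get_methods is a cheap RPC that any live target can answer
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done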
00:20:56.395 [2024-11-15 11:24:33.538438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76025 ] 00:20:56.395 [2024-11-15 11:24:33.708582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.654 [2024-11-15 11:24:33.812430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.590 11:24:34 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:57.590 11:24:34 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:20:57.590 11:24:34 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:57.590 [2024-11-15 11:24:34.892901] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:57.590 [2024-11-15 11:24:34.892967] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:57.850 [2024-11-15 11:24:35.057325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.850 [2024-11-15 11:24:35.057379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:57.850 [2024-11-15 11:24:35.057402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:57.850 [2024-11-15 11:24:35.057414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.850 [2024-11-15 11:24:35.061352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.850 [2024-11-15 11:24:35.061391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:57.850 [2024-11-15 11:24:35.061406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.921 ms 00:20:57.850 [2024-11-15 11:24:35.061417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.850 [2024-11-15 11:24:35.061533] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:57.850 [2024-11-15 11:24:35.062531] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:57.850 [2024-11-15 11:24:35.062577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.850 [2024-11-15 11:24:35.062589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:57.850 [2024-11-15 11:24:35.062603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:20:57.850 [2024-11-15 11:24:35.062614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.850 [2024-11-15 11:24:35.064076] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:57.850 [2024-11-15 11:24:35.083377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.850 [2024-11-15 11:24:35.083425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:57.850 [2024-11-15 11:24:35.083440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.335 ms 00:20:57.850 [2024-11-15 11:24:35.083457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.850 [2024-11-15 11:24:35.083576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.850 [2024-11-15 11:24:35.083594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:57.850 [2024-11-15 11:24:35.083606] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:57.850 [2024-11-15 11:24:35.083619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.850 [2024-11-15 11:24:35.090261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.850 [2024-11-15 11:24:35.090429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:57.850 [2024-11-15 11:24:35.090450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.601 ms 00:20:57.850 [2024-11-15 11:24:35.090463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.850 [2024-11-15 11:24:35.090590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.850 [2024-11-15 11:24:35.090607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:57.850 [2024-11-15 11:24:35.090619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:57.850 [2024-11-15 11:24:35.090632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.850 [2024-11-15 11:24:35.090664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.850 [2024-11-15 11:24:35.090678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:57.850 [2024-11-15 11:24:35.090689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:57.850 [2024-11-15 11:24:35.090710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.850 [2024-11-15 11:24:35.090736] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:57.850 [2024-11-15 11:24:35.095385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.850 [2024-11-15 11:24:35.095416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:57.850 [2024-11-15 11:24:35.095434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.657 ms 00:20:57.850 [2024-11-15 11:24:35.095444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.850 [2024-11-15 11:24:35.095521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.850 [2024-11-15 11:24:35.095534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:57.850 [2024-11-15 11:24:35.095551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:57.850 [2024-11-15 11:24:35.095581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.850 [2024-11-15 11:24:35.095609] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:57.850 [2024-11-15 11:24:35.095633] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:57.850 [2024-11-15 11:24:35.095685] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:57.850 [2024-11-15 11:24:35.095705] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:57.850 [2024-11-15 11:24:35.095801] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:57.850 [2024-11-15 11:24:35.095815] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:57.850 [2024-11-15 11:24:35.095842] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:57.850 [2024-11-15 11:24:35.095856] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:57.850 [2024-11-15 11:24:35.095874] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:57.850 [2024-11-15 11:24:35.095885] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:57.850 [2024-11-15 11:24:35.095905] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:57.850 [2024-11-15 11:24:35.095914] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:57.850 [2024-11-15 11:24:35.095934] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:57.850 [2024-11-15 11:24:35.095944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.850 [2024-11-15 11:24:35.095960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:57.850 [2024-11-15 11:24:35.095971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:20:57.850 [2024-11-15 11:24:35.095986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.850 [2024-11-15 11:24:35.096066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.850 [2024-11-15 11:24:35.096083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:57.850 [2024-11-15 11:24:35.096094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:57.850 [2024-11-15 11:24:35.096110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.850 [2024-11-15 11:24:35.096208] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:57.850 [2024-11-15 11:24:35.096228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:57.850 [2024-11-15 11:24:35.096239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.850 [2024-11-15 11:24:35.096255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.850 [2024-11-15 11:24:35.096266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:57.850 [2024-11-15 11:24:35.096281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:57.850 [2024-11-15 11:24:35.096291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:57.850 [2024-11-15 11:24:35.096312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:57.850 [2024-11-15 11:24:35.096322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:57.850 [2024-11-15 11:24:35.096337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.851 [2024-11-15 11:24:35.096346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:57.851 [2024-11-15 11:24:35.096362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:57.851 [2024-11-15 11:24:35.096372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.851 [2024-11-15 11:24:35.096387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:57.851 [2024-11-15 11:24:35.096397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:57.851 [2024-11-15 11:24:35.096412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.851 
[2024-11-15 11:24:35.096421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:57.851 [2024-11-15 11:24:35.096435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:57.851 [2024-11-15 11:24:35.096446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.851 [2024-11-15 11:24:35.096460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:57.851 [2024-11-15 11:24:35.096480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:57.851 [2024-11-15 11:24:35.096494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.851 [2024-11-15 11:24:35.096504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:57.851 [2024-11-15 11:24:35.096523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:57.851 [2024-11-15 11:24:35.096532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.851 [2024-11-15 11:24:35.096546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:57.851 [2024-11-15 11:24:35.096567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:57.851 [2024-11-15 11:24:35.096583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.851 [2024-11-15 11:24:35.096593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:57.851 [2024-11-15 11:24:35.096607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:57.851 [2024-11-15 11:24:35.096617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.851 [2024-11-15 11:24:35.096633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:57.851 [2024-11-15 11:24:35.096643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:57.851 [2024-11-15 11:24:35.096657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.851 [2024-11-15 11:24:35.096667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:57.851 [2024-11-15 11:24:35.096681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:57.851 [2024-11-15 11:24:35.096690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.851 [2024-11-15 11:24:35.096704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:57.851 [2024-11-15 11:24:35.096714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:57.851 [2024-11-15 11:24:35.096733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.851 [2024-11-15 11:24:35.096742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:57.851 [2024-11-15 11:24:35.096757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:57.851 [2024-11-15 11:24:35.096766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.851 [2024-11-15 11:24:35.096781] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:57.851 [2024-11-15 11:24:35.096796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:57.851 [2024-11-15 11:24:35.096811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.851 [2024-11-15 11:24:35.096821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.851 [2024-11-15 11:24:35.096837] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap
00:20:57.851 [2024-11-15 11:24:35.096847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:20:57.851 [2024-11-15 11:24:35.096860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:20:57.851 [2024-11-15 11:24:35.096870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:57.851 [2024-11-15 11:24:35.096884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:20:57.851 [2024-11-15 11:24:35.096894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:20:57.851 [2024-11-15 11:24:35.096909] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:57.851 [2024-11-15 11:24:35.096922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:57.851 [2024-11-15 11:24:35.096944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:57.851 [2024-11-15 11:24:35.096955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:57.851 [2024-11-15 11:24:35.096970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:57.851 [2024-11-15 11:24:35.096981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:57.851 [2024-11-15 11:24:35.096996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:57.851 [2024-11-15 11:24:35.097007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:57.851 [2024-11-15 11:24:35.097022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:57.851 [2024-11-15 11:24:35.097033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:57.851 [2024-11-15 11:24:35.097049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:57.851 [2024-11-15 11:24:35.097059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:57.851 [2024-11-15 11:24:35.097074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:57.851 [2024-11-15 11:24:35.097086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:57.851 [2024-11-15 11:24:35.097101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:57.851 [2024-11-15 11:24:35.097112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:57.851 [2024-11-15 11:24:35.097127] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:57.851 [2024-11-15 11:24:35.097139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:57.851 [2024-11-15 11:24:35.097159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:57.851 [2024-11-15 11:24:35.097170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:57.851 [2024-11-15 11:24:35.097186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:57.851 [2024-11-15 11:24:35.097197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
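In the superblock layout dump above, blk_offs and blk_sz are counts of FTL blocks; matching them against the MiB figures printed earlier implies a 4 KiB block size. For example, the L2P region (type:0x2) has blk_sz:0x5a00 = 23040 blocks = 90.00 MiB, which also equals the 23592960 L2P entries times the 4-byte address size, and the base data region (type:0x9) has blk_sz:0x1900000 = 102400.00 MiB. A quick conversion sketch; blk_sz_mib is illustrative only, and the 4 KiB block size is inferred from these totals rather than read from the log:

    # Convert a hex blk_sz from the layout dump to MiB, assuming 4 KiB FTL blocks.
    blk_sz_mib() {
        awk -v n="$((16#${1#0x}))" 'BEGIN { printf "%.2f MiB\n", n * 4096 / 1048576 }'
    }
    blk_sz_mib 0x5a00     # l2p region      -> 90.00 MiB
    blk_sz_mib 0x800      # one p2l region  -> 8.00 MiB
    blk_sz_mib 0x1900000  # base data area  -> 102400.00 MiB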
00:20:57.851 [2024-11-15 11:24:35.097213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:57.851 [2024-11-15 11:24:35.097225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:20:57.851 [2024-11-15 11:24:35.097240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms
00:20:57.851 [2024-11-15 11:24:35.097251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:57.851 [2024-11-15 11:24:35.138634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:57.851 [2024-11-15 11:24:35.138674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:57.851 [2024-11-15 11:24:35.138694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.378 ms
00:20:57.851 [2024-11-15 11:24:35.138710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:57.851 [2024-11-15 11:24:35.138840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:57.851 [2024-11-15 11:24:35.138853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:20:57.851 [2024-11-15 11:24:35.138869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms
00:20:57.851 [2024-11-15 11:24:35.138879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:57.851 [2024-11-15 11:24:35.188386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:57.851 [2024-11-15 11:24:35.188429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:57.851 [2024-11-15 11:24:35.188449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.555 ms
00:20:57.851 [2024-11-15 11:24:35.188460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:57.851 [2024-11-15 11:24:35.188579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:57.851 [2024-11-15 11:24:35.188593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:57.851 [2024-11-15 11:24:35.188622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:20:57.851 [2024-11-15 11:24:35.188633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:57.851 [2024-11-15 11:24:35.189067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:57.851 [2024-11-15 11:24:35.189085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:57.851 [2024-11-15 11:24:35.189107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms
00:20:57.851 [2024-11-15 11:24:35.189117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:20:57.851 [2024-11-15 11:24:35.189242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.851 [2024-11-15 11:24:35.189256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:57.851 [2024-11-15 11:24:35.189272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:57.851 [2024-11-15 11:24:35.189282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.851 [2024-11-15 11:24:35.211440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.851 [2024-11-15 11:24:35.211475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:57.851 [2024-11-15 11:24:35.211495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.161 ms 00:20:57.851 [2024-11-15 11:24:35.211506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.851 [2024-11-15 11:24:35.246214] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:57.852 [2024-11-15 11:24:35.246251] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:57.852 [2024-11-15 11:24:35.246274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.852 [2024-11-15 11:24:35.246286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:57.852 [2024-11-15 11:24:35.246303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.673 ms 00:20:57.852 [2024-11-15 11:24:35.246313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.111 [2024-11-15 11:24:35.275935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.111 [2024-11-15 11:24:35.276077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:58.111 [2024-11-15 11:24:35.276109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.580 ms 00:20:58.111 [2024-11-15 11:24:35.276121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.111 [2024-11-15 11:24:35.294401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.111 [2024-11-15 11:24:35.294438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:58.111 [2024-11-15 11:24:35.294462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.217 ms 00:20:58.111 [2024-11-15 11:24:35.294473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.111 [2024-11-15 11:24:35.312765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.111 [2024-11-15 11:24:35.312800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:58.111 [2024-11-15 11:24:35.312819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.210 ms 00:20:58.111 [2024-11-15 11:24:35.312829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.111 [2024-11-15 11:24:35.313541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.111 [2024-11-15 11:24:35.313581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:58.111 [2024-11-15 11:24:35.313597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:20:58.111 [2024-11-15 11:24:35.313608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.111 [2024-11-15 
11:24:35.400370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.111 [2024-11-15 11:24:35.400424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:58.111 [2024-11-15 11:24:35.400448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.864 ms 00:20:58.111 [2024-11-15 11:24:35.400460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.111 [2024-11-15 11:24:35.411548] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:58.111 [2024-11-15 11:24:35.427790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.111 [2024-11-15 11:24:35.427853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:58.111 [2024-11-15 11:24:35.427876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.205 ms 00:20:58.111 [2024-11-15 11:24:35.427891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.111 [2024-11-15 11:24:35.428006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.111 [2024-11-15 11:24:35.428024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:58.111 [2024-11-15 11:24:35.428037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:58.111 [2024-11-15 11:24:35.428052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.111 [2024-11-15 11:24:35.428108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.111 [2024-11-15 11:24:35.428124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:58.111 [2024-11-15 11:24:35.428136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:58.111 [2024-11-15 11:24:35.428157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.111 [2024-11-15 11:24:35.428184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.111 [2024-11-15 11:24:35.428200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:58.111 [2024-11-15 11:24:35.428211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:58.111 [2024-11-15 11:24:35.428226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.111 [2024-11-15 11:24:35.428271] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:58.111 [2024-11-15 11:24:35.428294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.111 [2024-11-15 11:24:35.428304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:58.111 [2024-11-15 11:24:35.428326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:58.111 [2024-11-15 11:24:35.428336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.111 [2024-11-15 11:24:35.464522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.111 [2024-11-15 11:24:35.464673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:58.111 [2024-11-15 11:24:35.464710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.204 ms 00:20:58.111 [2024-11-15 11:24:35.464723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.111 [2024-11-15 11:24:35.464850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.111 [2024-11-15 11:24:35.464864] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:58.111 [2024-11-15 11:24:35.464880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms
00:20:58.112 [2024-11-15 11:24:35.464897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:58.112 [2024-11-15 11:24:35.465857] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:58.112 [2024-11-15 11:24:35.469913] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 408.833 ms, result 0
00:20:58.112 [2024-11-15 11:24:35.471392] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:58.112 Some configs were skipped because the RPC state that can call them passed over.
00:20:58.371 11:24:35 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:58.371 [2024-11-15 11:24:35.711116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.371 [2024-11-15 11:24:35.711177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:58.371 [2024-11-15 11:24:35.711195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.710 ms
00:20:58.371 [2024-11-15 11:24:35.711209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:58.371 [2024-11-15 11:24:35.711245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.842 ms, result 0
00:20:58.371 true
00:20:58.371 11:24:35 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:58.632 [2024-11-15 11:24:35.926494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.632 [2024-11-15 11:24:35.926546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:58.632 [2024-11-15 11:24:35.926595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.225 ms
00:20:58.632 [2024-11-15 11:24:35.926608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:58.632 [2024-11-15 11:24:35.926674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.411 ms, result 0
00:20:58.632 true
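The two bdev_ftl_unmap calls above trim 1024-block extents at both ends of the device's L2P space: one at LBA 0, and one at LBA 23591936, which is 23592960 minus 1024, i.e. the last 1024 entries reported in the layout dump. The same RPC can be pointed at any similarly aligned range; the values below are illustrative only:

    # Illustrative: trim a 1024-block extent in the middle of ftl0. The lba and
    # num_blocks values here are made up for the example; both must respect the
    # device's unmap granularity, as in the two calls traced above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 \
        --lba 1048576 --num_blocks 1024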
00:20:58.632 11:24:35 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76025
00:20:58.632 11:24:35 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 76025 ']'
00:20:58.632 11:24:35 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 76025
00:20:58.632 11:24:35 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname
00:20:58.632 11:24:35 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:58.632 11:24:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76025
00:20:58.632 11:24:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:20:58.632 11:24:35 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:20:58.632 11:24:35 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76025'
killing process with pid 76025
00:20:58.632 11:24:35 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 76025
00:20:58.632 11:24:35 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 76025
00:21:00.009 [2024-11-15 11:24:37.106017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:00.009 [2024-11-15 11:24:37.106278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:00.009 [2024-11-15 11:24:37.106306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:21:00.009 [2024-11-15 11:24:37.106320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:00.009 [2024-11-15 11:24:37.106379] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:21:00.009 [2024-11-15 11:24:37.110618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:00.009 [2024-11-15 11:24:37.110651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:00.009 [2024-11-15 11:24:37.110670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.223 ms
00:21:00.009 [2024-11-15 11:24:37.110680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:00.009 [2024-11-15 11:24:37.110933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:00.009 [2024-11-15 11:24:37.110946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:00.009 [2024-11-15 11:24:37.110959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms
00:21:00.009 [2024-11-15 11:24:37.110970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:00.009 [2024-11-15 11:24:37.114252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:00.009 [2024-11-15 11:24:37.114289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:21:00.009 [2024-11-15 11:24:37.114307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.264 ms
00:21:00.009 [2024-11-15 11:24:37.114318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:00.009 [2024-11-15 11:24:37.119967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:00.009 [2024-11-15 11:24:37.120000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:21:00.009 [2024-11-15 11:24:37.120015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.617 ms
00:21:00.009 [2024-11-15 11:24:37.120025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:00.009 [2024-11-15 11:24:37.135172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:00.009 [2024-11-15 11:24:37.135208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:21:00.009 [2024-11-15 11:24:37.135229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.108 ms
00:21:00.009 [2024-11-15 11:24:37.135249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:00.009 [2024-11-15 11:24:37.146052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:00.009 [2024-11-15 11:24:37.146092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:21:00.009 [2024-11-15 11:24:37.146108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.749 ms
00:21:00.009 [2024-11-15 11:24:37.146119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:00.009 [2024-11-15 11:24:37.146272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:00.009 [2024-11-15 11:24:37.146287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:21:00.009 [2024-11-15 11:24:37.146301] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:00.009 [2024-11-15 11:24:37.146311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.009 [2024-11-15 11:24:37.162188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.009 [2024-11-15 11:24:37.162222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:00.009 [2024-11-15 11:24:37.162237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.879 ms 00:21:00.009 [2024-11-15 11:24:37.162247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.009 [2024-11-15 11:24:37.177835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.009 [2024-11-15 11:24:37.177868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:00.009 [2024-11-15 11:24:37.177887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.559 ms 00:21:00.009 [2024-11-15 11:24:37.177897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.009 [2024-11-15 11:24:37.192822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.009 [2024-11-15 11:24:37.192961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:00.009 [2024-11-15 11:24:37.192989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.896 ms 00:21:00.009 [2024-11-15 11:24:37.192999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.009 [2024-11-15 11:24:37.207581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.009 [2024-11-15 11:24:37.207714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:00.009 [2024-11-15 11:24:37.207739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.504 ms 00:21:00.009 [2024-11-15 11:24:37.207749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.009 [2024-11-15 11:24:37.207826] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:00.009 [2024-11-15 11:24:37.207844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.207859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.207871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.207884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.207895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.207911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.207922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.207935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.207946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.207959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 
11:24:37.207970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.207983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.207994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.208008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.208018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.208031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.208042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.208058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.208068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.208082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.208093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.208108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:00.009 [2024-11-15 11:24:37.208119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:21:00.010 [2024-11-15 11:24:37.208277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.208991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:00.010 [2024-11-15 11:24:37.209197] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:00.011 [2024-11-15 11:24:37.209225] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b74028cb-3aa2-4783-bfa5-17ab25fa65a1 00:21:00.011 [2024-11-15 11:24:37.209248] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:00.011 [2024-11-15 11:24:37.209269] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:00.011 [2024-11-15 11:24:37.209279] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:00.011 [2024-11-15 11:24:37.209294] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:00.011 [2024-11-15 11:24:37.209304] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:00.011 [2024-11-15 11:24:37.209319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:00.011 [2024-11-15 11:24:37.209330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:00.011 [2024-11-15 11:24:37.209344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:00.011 [2024-11-15 11:24:37.209353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:00.011 [2024-11-15 11:24:37.209367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:00.011 [2024-11-15 11:24:37.209378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:00.011 [2024-11-15 11:24:37.209394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:21:00.011 [2024-11-15 11:24:37.209404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.011 [2024-11-15 11:24:37.229350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.011 [2024-11-15 11:24:37.229383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:00.011 [2024-11-15 11:24:37.229406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.940 ms 00:21:00.011 [2024-11-15 11:24:37.229417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.011 [2024-11-15 11:24:37.229991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.011 [2024-11-15 11:24:37.230015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:00.011 [2024-11-15 11:24:37.230032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:21:00.011 [2024-11-15 11:24:37.230048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.011 [2024-11-15 11:24:37.299632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.011 [2024-11-15 11:24:37.299671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.011 [2024-11-15 11:24:37.299691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.011 [2024-11-15 11:24:37.299702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.011 [2024-11-15 11:24:37.299799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.011 [2024-11-15 11:24:37.299812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.011 [2024-11-15 11:24:37.299828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.011 [2024-11-15 11:24:37.299845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.011 [2024-11-15 11:24:37.299903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.011 [2024-11-15 11:24:37.299916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.011 [2024-11-15 11:24:37.299936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.011 [2024-11-15 11:24:37.299946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.011 [2024-11-15 11:24:37.299971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.011 [2024-11-15 11:24:37.299982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.011 [2024-11-15 11:24:37.299998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.011 [2024-11-15 11:24:37.300008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.270 [2024-11-15 11:24:37.426171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.270 [2024-11-15 11:24:37.426233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.270 [2024-11-15 11:24:37.426256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.270 [2024-11-15 11:24:37.426267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.270 [2024-11-15 
11:24:37.527775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.270 [2024-11-15 11:24:37.527825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.270 [2024-11-15 11:24:37.527843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.270 [2024-11-15 11:24:37.527857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.270 [2024-11-15 11:24:37.527972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.270 [2024-11-15 11:24:37.527985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.270 [2024-11-15 11:24:37.528001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.270 [2024-11-15 11:24:37.528012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.270 [2024-11-15 11:24:37.528045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.270 [2024-11-15 11:24:37.528056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.270 [2024-11-15 11:24:37.528069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.270 [2024-11-15 11:24:37.528079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.270 [2024-11-15 11:24:37.528199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.270 [2024-11-15 11:24:37.528213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.270 [2024-11-15 11:24:37.528226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.270 [2024-11-15 11:24:37.528236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.270 [2024-11-15 11:24:37.528278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.270 [2024-11-15 11:24:37.528291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:00.270 [2024-11-15 11:24:37.528304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.270 [2024-11-15 11:24:37.528314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.270 [2024-11-15 11:24:37.528361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.270 [2024-11-15 11:24:37.528373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.270 [2024-11-15 11:24:37.528390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.270 [2024-11-15 11:24:37.528400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.270 [2024-11-15 11:24:37.528446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.270 [2024-11-15 11:24:37.528458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.270 [2024-11-15 11:24:37.528472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.270 [2024-11-15 11:24:37.528482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.270 [2024-11-15 11:24:37.528651] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 423.290 ms, result 0 00:21:01.203 11:24:38 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:01.462 [2024-11-15 11:24:38.670481] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:21:01.462 [2024-11-15 11:24:38.670618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76091 ] 00:21:01.462 [2024-11-15 11:24:38.850540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.720 [2024-11-15 11:24:38.964645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.979 [2024-11-15 11:24:39.364982] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.979 [2024-11-15 11:24:39.365059] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:02.238 [2024-11-15 11:24:39.527278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.238 [2024-11-15 11:24:39.527327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:02.238 [2024-11-15 11:24:39.527343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:02.238 [2024-11-15 11:24:39.527354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.238 [2024-11-15 11:24:39.530525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.238 [2024-11-15 11:24:39.530570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:02.238 [2024-11-15 11:24:39.530583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.155 ms 00:21:02.238 [2024-11-15 11:24:39.530593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.238 [2024-11-15 11:24:39.530693] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:02.238 [2024-11-15 11:24:39.531803] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:02.238 [2024-11-15 11:24:39.531837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.238 [2024-11-15 11:24:39.531849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:02.238 [2024-11-15 11:24:39.531860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:21:02.238 [2024-11-15 11:24:39.531870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.238 [2024-11-15 11:24:39.533337] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:02.238 [2024-11-15 11:24:39.553321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.238 [2024-11-15 11:24:39.553363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:02.238 [2024-11-15 11:24:39.553378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.016 ms 00:21:02.238 [2024-11-15 11:24:39.553388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.238 [2024-11-15 11:24:39.553490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.238 [2024-11-15 11:24:39.553505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:02.238 [2024-11-15 11:24:39.553517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:02.238 [2024-11-15 
11:24:39.553527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.238 [2024-11-15 11:24:39.560218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.238 [2024-11-15 11:24:39.560247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:02.238 [2024-11-15 11:24:39.560260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.639 ms 00:21:02.238 [2024-11-15 11:24:39.560270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.238 [2024-11-15 11:24:39.560367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.238 [2024-11-15 11:24:39.560381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:02.238 [2024-11-15 11:24:39.560392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:02.238 [2024-11-15 11:24:39.560402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.238 [2024-11-15 11:24:39.560433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.238 [2024-11-15 11:24:39.560448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:02.238 [2024-11-15 11:24:39.560459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:02.238 [2024-11-15 11:24:39.560469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.238 [2024-11-15 11:24:39.560493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:02.238 [2024-11-15 11:24:39.565291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.238 [2024-11-15 11:24:39.565339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:02.238 [2024-11-15 11:24:39.565353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.812 ms 00:21:02.238 [2024-11-15 11:24:39.565363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.238 [2024-11-15 11:24:39.565429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.238 [2024-11-15 11:24:39.565442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:02.238 [2024-11-15 11:24:39.565453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:02.238 [2024-11-15 11:24:39.565463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.238 [2024-11-15 11:24:39.565483] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:02.238 [2024-11-15 11:24:39.565509] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:02.238 [2024-11-15 11:24:39.565545] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:02.238 [2024-11-15 11:24:39.565577] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:02.238 [2024-11-15 11:24:39.565668] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:02.238 [2024-11-15 11:24:39.565681] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:02.238 [2024-11-15 11:24:39.565695] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:21:02.238 [2024-11-15 11:24:39.565708] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:02.238 [2024-11-15 11:24:39.565724] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:02.238 [2024-11-15 11:24:39.565735] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:02.238 [2024-11-15 11:24:39.565745] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:02.238 [2024-11-15 11:24:39.565756] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:02.238 [2024-11-15 11:24:39.565766] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:02.238 [2024-11-15 11:24:39.565776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.238 [2024-11-15 11:24:39.565786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:02.238 [2024-11-15 11:24:39.565797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:21:02.238 [2024-11-15 11:24:39.565806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.238 [2024-11-15 11:24:39.565883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.238 [2024-11-15 11:24:39.565898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:02.238 [2024-11-15 11:24:39.565908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:02.238 [2024-11-15 11:24:39.565918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.238 [2024-11-15 11:24:39.566006] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:02.238 [2024-11-15 11:24:39.566018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:02.238 [2024-11-15 11:24:39.566029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.238 [2024-11-15 11:24:39.566039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.238 [2024-11-15 11:24:39.566050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:02.238 [2024-11-15 11:24:39.566060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:02.238 [2024-11-15 11:24:39.566069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:02.238 [2024-11-15 11:24:39.566078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:02.238 [2024-11-15 11:24:39.566087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:02.238 [2024-11-15 11:24:39.566096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.238 [2024-11-15 11:24:39.566106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:02.238 [2024-11-15 11:24:39.566116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:02.238 [2024-11-15 11:24:39.566124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.238 [2024-11-15 11:24:39.566144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:02.238 [2024-11-15 11:24:39.566164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:02.238 [2024-11-15 11:24:39.566174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.238 [2024-11-15 11:24:39.566183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:21:02.238 [2024-11-15 11:24:39.566193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:02.238 [2024-11-15 11:24:39.566202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.238 [2024-11-15 11:24:39.566212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:02.238 [2024-11-15 11:24:39.566221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:02.238 [2024-11-15 11:24:39.566230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.238 [2024-11-15 11:24:39.566239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:02.238 [2024-11-15 11:24:39.566248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:02.238 [2024-11-15 11:24:39.566257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.238 [2024-11-15 11:24:39.566266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:02.238 [2024-11-15 11:24:39.566275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:02.238 [2024-11-15 11:24:39.566284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.238 [2024-11-15 11:24:39.566293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:02.238 [2024-11-15 11:24:39.566303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:02.238 [2024-11-15 11:24:39.566312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.238 [2024-11-15 11:24:39.566321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:02.238 [2024-11-15 11:24:39.566330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:02.238 [2024-11-15 11:24:39.566339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.238 [2024-11-15 11:24:39.566348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:02.239 [2024-11-15 11:24:39.566357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:02.239 [2024-11-15 11:24:39.566365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.239 [2024-11-15 11:24:39.566374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:02.239 [2024-11-15 11:24:39.566383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:02.239 [2024-11-15 11:24:39.566392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.239 [2024-11-15 11:24:39.566402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:02.239 [2024-11-15 11:24:39.566411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:02.239 [2024-11-15 11:24:39.566421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.239 [2024-11-15 11:24:39.566430] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:02.239 [2024-11-15 11:24:39.566441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:02.239 [2024-11-15 11:24:39.566451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.239 [2024-11-15 11:24:39.566464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.239 [2024-11-15 11:24:39.566475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:02.239 [2024-11-15 11:24:39.566484] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:02.239 [2024-11-15 11:24:39.566493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:02.239 [2024-11-15 11:24:39.566503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:02.239 [2024-11-15 11:24:39.566512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:02.239 [2024-11-15 11:24:39.566521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:02.239 [2024-11-15 11:24:39.566532] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:02.239 [2024-11-15 11:24:39.566544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.239 [2024-11-15 11:24:39.566567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:02.239 [2024-11-15 11:24:39.566578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:02.239 [2024-11-15 11:24:39.566589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:02.239 [2024-11-15 11:24:39.566599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:02.239 [2024-11-15 11:24:39.566609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:02.239 [2024-11-15 11:24:39.566620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:02.239 [2024-11-15 11:24:39.566630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:02.239 [2024-11-15 11:24:39.566641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:02.239 [2024-11-15 11:24:39.566651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:02.239 [2024-11-15 11:24:39.566661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:02.239 [2024-11-15 11:24:39.566672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:02.239 [2024-11-15 11:24:39.566682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:02.239 [2024-11-15 11:24:39.566692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:02.239 [2024-11-15 11:24:39.566703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:02.239 [2024-11-15 11:24:39.566713] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:02.239 [2024-11-15 11:24:39.566724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.239 [2024-11-15 11:24:39.566735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:02.239 [2024-11-15 11:24:39.566746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:02.239 [2024-11-15 11:24:39.566756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:02.239 [2024-11-15 11:24:39.566767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:02.239 [2024-11-15 11:24:39.566778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.239 [2024-11-15 11:24:39.566789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:02.239 [2024-11-15 11:24:39.566803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:21:02.239 [2024-11-15 11:24:39.566813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.239 [2024-11-15 11:24:39.609737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.239 [2024-11-15 11:24:39.609782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.239 [2024-11-15 11:24:39.609796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.939 ms 00:21:02.239 [2024-11-15 11:24:39.609807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.239 [2024-11-15 11:24:39.609941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.239 [2024-11-15 11:24:39.609960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:02.239 [2024-11-15 11:24:39.609971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:02.239 [2024-11-15 11:24:39.609981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.497 [2024-11-15 11:24:39.667367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.497 [2024-11-15 11:24:39.667425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.497 [2024-11-15 11:24:39.667441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.455 ms 00:21:02.497 [2024-11-15 11:24:39.667456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.497 [2024-11-15 11:24:39.667582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.497 [2024-11-15 11:24:39.667596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.497 [2024-11-15 11:24:39.667607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:02.497 [2024-11-15 11:24:39.667618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.497 [2024-11-15 11:24:39.668053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.497 [2024-11-15 11:24:39.668066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.497 [2024-11-15 11:24:39.668077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:21:02.497 [2024-11-15 11:24:39.668091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.497 [2024-11-15 11:24:39.668212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:02.497 [2024-11-15 11:24:39.668231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.497 [2024-11-15 11:24:39.668242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:21:02.497 [2024-11-15 11:24:39.668251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.497 [2024-11-15 11:24:39.687701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.497 [2024-11-15 11:24:39.687868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.497 [2024-11-15 11:24:39.687891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.458 ms 00:21:02.497 [2024-11-15 11:24:39.687902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.497 [2024-11-15 11:24:39.707082] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:02.497 [2024-11-15 11:24:39.707239] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:02.497 [2024-11-15 11:24:39.707346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.497 [2024-11-15 11:24:39.707381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:02.497 [2024-11-15 11:24:39.707413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.349 ms 00:21:02.497 [2024-11-15 11:24:39.707442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.497 [2024-11-15 11:24:39.736939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.497 [2024-11-15 11:24:39.737084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:02.498 [2024-11-15 11:24:39.737216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.446 ms 00:21:02.498 [2024-11-15 11:24:39.737255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.498 [2024-11-15 11:24:39.755387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.498 [2024-11-15 11:24:39.755515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:02.498 [2024-11-15 11:24:39.755601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.081 ms 00:21:02.498 [2024-11-15 11:24:39.755641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.498 [2024-11-15 11:24:39.772906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.498 [2024-11-15 11:24:39.773050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:02.498 [2024-11-15 11:24:39.773127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.199 ms 00:21:02.498 [2024-11-15 11:24:39.773162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.498 [2024-11-15 11:24:39.774050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.498 [2024-11-15 11:24:39.774183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:02.498 [2024-11-15 11:24:39.774256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:21:02.498 [2024-11-15 11:24:39.774291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.498 [2024-11-15 11:24:39.858856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.498 [2024-11-15 
11:24:39.859072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:02.498 [2024-11-15 11:24:39.859156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.647 ms 00:21:02.498 [2024-11-15 11:24:39.859193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.498 [2024-11-15 11:24:39.869991] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:02.498 [2024-11-15 11:24:39.885742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.498 [2024-11-15 11:24:39.885912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:02.498 [2024-11-15 11:24:39.885992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.459 ms 00:21:02.498 [2024-11-15 11:24:39.886036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.498 [2024-11-15 11:24:39.886188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.498 [2024-11-15 11:24:39.886307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:02.498 [2024-11-15 11:24:39.886390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:02.498 [2024-11-15 11:24:39.886403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.498 [2024-11-15 11:24:39.886465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.498 [2024-11-15 11:24:39.886477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:02.498 [2024-11-15 11:24:39.886488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:02.498 [2024-11-15 11:24:39.886498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.498 [2024-11-15 11:24:39.886538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.498 [2024-11-15 11:24:39.886551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:02.498 [2024-11-15 11:24:39.886579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:02.498 [2024-11-15 11:24:39.886589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.498 [2024-11-15 11:24:39.886629] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:02.498 [2024-11-15 11:24:39.886641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.498 [2024-11-15 11:24:39.886651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:02.498 [2024-11-15 11:24:39.886662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:02.498 [2024-11-15 11:24:39.886671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.756 [2024-11-15 11:24:39.922349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.756 [2024-11-15 11:24:39.922389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:02.756 [2024-11-15 11:24:39.922404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.714 ms 00:21:02.756 [2024-11-15 11:24:39.922415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.756 [2024-11-15 11:24:39.922532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.756 [2024-11-15 11:24:39.922547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:02.756 [2024-11-15 
11:24:39.922567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:02.756 [2024-11-15 11:24:39.922578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.756 [2024-11-15 11:24:39.923485] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:02.756 [2024-11-15 11:24:39.927807] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.540 ms, result 0 00:21:02.756 [2024-11-15 11:24:39.928462] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:02.756 [2024-11-15 11:24:39.946801] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:03.690  [2024-11-15T11:24:42.026Z] Copying: 33/256 [MB] (33 MBps) [2024-11-15T11:24:43.399Z] Copying: 62/256 [MB] (29 MBps) [2024-11-15T11:24:44.335Z] Copying: 90/256 [MB] (27 MBps) [2024-11-15T11:24:45.268Z] Copying: 117/256 [MB] (27 MBps) [2024-11-15T11:24:46.204Z] Copying: 147/256 [MB] (30 MBps) [2024-11-15T11:24:47.142Z] Copying: 178/256 [MB] (31 MBps) [2024-11-15T11:24:48.075Z] Copying: 206/256 [MB] (27 MBps) [2024-11-15T11:24:49.010Z] Copying: 233/256 [MB] (26 MBps) [2024-11-15T11:24:49.270Z] Copying: 256/256 [MB] (average 28 MBps)[2024-11-15 11:24:49.173980] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:11.869 [2024-11-15 11:24:49.191991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.869 [2024-11-15 11:24:49.192041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:11.869 [2024-11-15 11:24:49.192057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:11.869 [2024-11-15 11:24:49.192075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.869 [2024-11-15 11:24:49.192101] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:11.869 [2024-11-15 11:24:49.196614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.869 [2024-11-15 11:24:49.196647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:11.869 [2024-11-15 11:24:49.196660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.502 ms 00:21:11.869 [2024-11-15 11:24:49.196670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.869 [2024-11-15 11:24:49.196922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.869 [2024-11-15 11:24:49.196935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:11.869 [2024-11-15 11:24:49.196946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:21:11.869 [2024-11-15 11:24:49.196956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.869 [2024-11-15 11:24:49.200004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.869 [2024-11-15 11:24:49.200033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:11.869 [2024-11-15 11:24:49.200044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.035 ms 00:21:11.869 [2024-11-15 11:24:49.200055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.869 [2024-11-15 11:24:49.205978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:21:11.869 [2024-11-15 11:24:49.206014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:11.869 [2024-11-15 11:24:49.206026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.907 ms 00:21:11.869 [2024-11-15 11:24:49.206036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.869 [2024-11-15 11:24:49.244741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.869 [2024-11-15 11:24:49.244788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:11.870 [2024-11-15 11:24:49.244803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.687 ms 00:21:11.870 [2024-11-15 11:24:49.244815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.870 [2024-11-15 11:24:49.265840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.870 [2024-11-15 11:24:49.265888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:11.870 [2024-11-15 11:24:49.265907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.984 ms 00:21:11.870 [2024-11-15 11:24:49.265918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.870 [2024-11-15 11:24:49.266061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.870 [2024-11-15 11:24:49.266075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:11.870 [2024-11-15 11:24:49.266086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:21:11.870 [2024-11-15 11:24:49.266097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.129 [2024-11-15 11:24:49.302329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.129 [2024-11-15 11:24:49.302369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:12.129 [2024-11-15 11:24:49.302383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.261 ms 00:21:12.129 [2024-11-15 11:24:49.302393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.129 [2024-11-15 11:24:49.337783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.129 [2024-11-15 11:24:49.337822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:12.129 [2024-11-15 11:24:49.337835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.388 ms 00:21:12.129 [2024-11-15 11:24:49.337845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.129 [2024-11-15 11:24:49.373292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.129 [2024-11-15 11:24:49.373328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:12.129 [2024-11-15 11:24:49.373342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.445 ms 00:21:12.129 [2024-11-15 11:24:49.373368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.129 [2024-11-15 11:24:49.409576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.129 [2024-11-15 11:24:49.409613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:12.129 [2024-11-15 11:24:49.409626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.178 ms 00:21:12.129 [2024-11-15 11:24:49.409636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.129 [2024-11-15 
11:24:49.409695] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:12.129 [2024-11-15 11:24:49.409712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:12.129 [Bands 2 through 98 report identically: 0 / 261120 wr_cnt: 0 state: free] 00:21:12.131 [2024-11-15 11:24:49.410777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:12.131 [2024-11-15 11:24:49.410787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:12.131 [2024-11-15 11:24:49.410805] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:12.131 [2024-11-15 11:24:49.410815] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b74028cb-3aa2-4783-bfa5-17ab25fa65a1 00:21:12.131 [2024-11-15 11:24:49.410826] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:12.131 [2024-11-15 11:24:49.410836] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:12.131 [2024-11-15 11:24:49.410846] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:12.131 [2024-11-15 11:24:49.410856] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:12.131 [2024-11-15 11:24:49.410867] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:12.131 [2024-11-15 11:24:49.410877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:12.131 [2024-11-15 11:24:49.410887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:12.131 [2024-11-15 11:24:49.410897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:12.131 [2024-11-15 11:24:49.410906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:12.131 [2024-11-15 11:24:49.410916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.131 [2024-11-15 11:24:49.410931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:12.131 [2024-11-15 11:24:49.410942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:21:12.131 [2024-11-15 11:24:49.410952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.131 [2024-11-15 11:24:49.431028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.131 [2024-11-15 11:24:49.431063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:12.131 [2024-11-15 11:24:49.431076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.088 ms 00:21:12.131 [2024-11-15 11:24:49.431087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.131 [2024-11-15 11:24:49.431656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.131 [2024-11-15 11:24:49.431679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:12.131 [2024-11-15 11:24:49.431690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:21:12.131 [2024-11-15 11:24:49.431700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.131 [2024-11-15 11:24:49.486894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.131 [2024-11-15 11:24:49.486934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:12.131 [2024-11-15 11:24:49.486947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.131 [2024-11-15 11:24:49.486957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.131 [2024-11-15 11:24:49.487042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.131 [2024-11-15 11:24:49.487054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:12.131 [2024-11-15 11:24:49.487065] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.131 [2024-11-15 11:24:49.487075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.131 [2024-11-15 11:24:49.487123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.131 [2024-11-15 11:24:49.487137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:12.131 [2024-11-15 11:24:49.487147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.131 [2024-11-15 11:24:49.487157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.131 [2024-11-15 11:24:49.487177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.131 [2024-11-15 11:24:49.487192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:12.131 [2024-11-15 11:24:49.487202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.131 [2024-11-15 11:24:49.487212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-15 11:24:49.610852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.390 [2024-11-15 11:24:49.610909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:12.390 [2024-11-15 11:24:49.610933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.390 [2024-11-15 11:24:49.610944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-15 11:24:49.712619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.390 [2024-11-15 11:24:49.712676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:12.390 [2024-11-15 11:24:49.712690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.390 [2024-11-15 11:24:49.712702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-15 11:24:49.712800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.390 [2024-11-15 11:24:49.712812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:12.390 [2024-11-15 11:24:49.712823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.390 [2024-11-15 11:24:49.712834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-15 11:24:49.712864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.390 [2024-11-15 11:24:49.712875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:12.390 [2024-11-15 11:24:49.712888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.391 [2024-11-15 11:24:49.712898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.391 [2024-11-15 11:24:49.713013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.391 [2024-11-15 11:24:49.713027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:12.391 [2024-11-15 11:24:49.713038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.391 [2024-11-15 11:24:49.713049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.391 [2024-11-15 11:24:49.713091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.391 [2024-11-15 11:24:49.713102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:21:12.391 [2024-11-15 11:24:49.713113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.391 [2024-11-15 11:24:49.713127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.391 [2024-11-15 11:24:49.713166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.391 [2024-11-15 11:24:49.713178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:12.391 [2024-11-15 11:24:49.713188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.391 [2024-11-15 11:24:49.713198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.391 [2024-11-15 11:24:49.713253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.391 [2024-11-15 11:24:49.713265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:12.391 [2024-11-15 11:24:49.713279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.391 [2024-11-15 11:24:49.713289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.391 [2024-11-15 11:24:49.713448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 522.299 ms, result 0 00:21:13.766 00:21:13.766 00:21:13.766 11:24:50 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:14.025 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:14.025 11:24:51 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:14.025 11:24:51 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:14.025 11:24:51 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:14.025 11:24:51 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:14.025 11:24:51 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:14.025 11:24:51 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:14.025 Process with pid 76025 is not found 00:21:14.025 11:24:51 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76025 00:21:14.025 11:24:51 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 76025 ']' 00:21:14.025 11:24:51 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 76025 00:21:14.025 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (76025) - No such process 00:21:14.025 11:24:51 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 76025 is not found' 00:21:14.025 ************************************ 00:21:14.025 END TEST ftl_trim 00:21:14.025 ************************************ 00:21:14.025 00:21:14.025 real 1m8.235s 00:21:14.025 user 1m33.323s 00:21:14.025 sys 0m6.845s 00:21:14.025 11:24:51 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:14.025 11:24:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:14.286 11:24:51 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:14.286 11:24:51 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:14.286 11:24:51 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:14.286 11:24:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:14.286 ************************************ 00:21:14.286 START TEST ftl_restore 00:21:14.286 
************************************ 00:21:14.286 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:14.286 * Looking for test storage... 00:21:14.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:14.286 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:14.286 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:21:14.286 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:14.286 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:14.286 11:24:51 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:14.286 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.286 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.286 --rc genhtml_branch_coverage=1 00:21:14.286 --rc genhtml_function_coverage=1 00:21:14.286 --rc genhtml_legend=1 00:21:14.286 --rc geninfo_all_blocks=1 00:21:14.286 --rc geninfo_unexecuted_blocks=1 00:21:14.286 00:21:14.286 ' 00:21:14.286 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.286 --rc genhtml_branch_coverage=1 00:21:14.286 --rc genhtml_function_coverage=1 00:21:14.286 --rc genhtml_legend=1 00:21:14.286 --rc geninfo_all_blocks=1 00:21:14.286 --rc geninfo_unexecuted_blocks=1 00:21:14.286 00:21:14.286 ' 00:21:14.286 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.286 --rc genhtml_branch_coverage=1 00:21:14.286 --rc genhtml_function_coverage=1 00:21:14.286 --rc genhtml_legend=1 00:21:14.286 --rc geninfo_all_blocks=1 00:21:14.286 --rc geninfo_unexecuted_blocks=1 00:21:14.286 00:21:14.286 ' 00:21:14.286 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.286 --rc genhtml_branch_coverage=1 00:21:14.286 --rc genhtml_function_coverage=1 00:21:14.286 --rc genhtml_legend=1 00:21:14.286 --rc geninfo_all_blocks=1 00:21:14.286 --rc geninfo_unexecuted_blocks=1 00:21:14.286 00:21:14.286 ' 00:21:14.286 11:24:51 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:14.286 11:24:51 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:14.286 11:24:51 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
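The xtrace just above is the repo's lcov version gate: cmp_versions splits "1.15" and "2" on IFS=.-:, validates each component with decimal, and walks them left to right; the first position where ver1 falls below ver2 makes `lt 1.15 2` succeed, which is why the branch- and function-coverage flags get exported into LCOV_OPTS. A simplified reconstruction of that comparison (the real helper lives in scripts/common.sh; treat this body as a sketch, not a verbatim copy):

# Simplified sketch of the cmp_versions logic traced above.
cmp_versions() {
    local IFS=.-:              # split versions on '.', '-' and ':'
    local ver1 ver2 v
    read -ra ver1 <<< "$1"     # "1.15" -> (1 15)
    read -ra ver2 <<< "$3"     # "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == '>' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == *'='* ]]          # all components equal: only <=, >=, == succeed
}
lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> true (1 < 2)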
00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.umVbPJvkEd 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:14.546 
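restore.sh's argument handling is fully visible in the xtrace above: getopts with the optstring ":u:c:f" maps -c to the NV cache BDF (0000:00:10.0 in this run), the subsequent `shift 2` leaves the base device BDF (0000:00:11.0) as the positional argument, the per-RPC timeout is pinned to 240 s, and a trap guarantees restore_kill runs on any signal or premature exit. A sketch of the same pattern; -u and -f are not exercised in this run, so their meanings below are assumptions:

# Sketch of the option parsing traced above (-u/-f semantics assumed).
nv_cache="" uuid="" fast=0
while getopts ":u:c:f" opt; do
    case $opt in
        u) uuid=$OPTARG ;;       # presumably: restore an existing FTL by UUID
        c) nv_cache=$OPTARG ;;   # PCIe BDF of the NV cache controller
        f) fast=1 ;;             # presumably: request the fast-shutdown path
    esac
done
shift $((OPTIND - 1))            # the traced script does the equivalent 'shift 2'
device=$1                        # base device BDF, e.g. 0000:00:11.0
timeout=240                      # each rpc.py call runs as 'rpc.py -t 240 ...'
trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT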
11:24:51 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76289 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76289 00:21:14.546 11:24:51 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.546 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 76289 ']' 00:21:14.546 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.546 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:14.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.546 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.546 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:14.546 11:24:51 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:14.546 [2024-11-15 11:24:51.820921] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:21:14.546 [2024-11-15 11:24:51.821051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76289 ] 00:21:14.805 [2024-11-15 11:24:52.002406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.805 [2024-11-15 11:24:52.115550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.742 11:24:52 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:15.742 11:24:52 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:21:15.742 11:24:52 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:15.742 11:24:52 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:15.742 11:24:52 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:15.742 11:24:52 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:15.742 11:24:52 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:15.742 11:24:52 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:16.000 11:24:53 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:16.000 11:24:53 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:16.000 11:24:53 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:16.000 11:24:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:21:16.000 11:24:53 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:16.000 11:24:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:21:16.000 11:24:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:21:16.000 11:24:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:16.258 11:24:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:16.258 { 00:21:16.258 "name": "nvme0n1", 00:21:16.258 "aliases": [ 00:21:16.258 "97d44e91-25ee-46f6-a981-aad839020c23" 00:21:16.258 ], 00:21:16.258 "product_name": "NVMe disk", 00:21:16.258 "block_size": 4096, 00:21:16.258 "num_blocks": 1310720, 00:21:16.258 "uuid": 
"97d44e91-25ee-46f6-a981-aad839020c23", 00:21:16.258 "numa_id": -1, 00:21:16.258 "assigned_rate_limits": { 00:21:16.258 "rw_ios_per_sec": 0, 00:21:16.258 "rw_mbytes_per_sec": 0, 00:21:16.258 "r_mbytes_per_sec": 0, 00:21:16.258 "w_mbytes_per_sec": 0 00:21:16.258 }, 00:21:16.258 "claimed": true, 00:21:16.258 "claim_type": "read_many_write_one", 00:21:16.258 "zoned": false, 00:21:16.258 "supported_io_types": { 00:21:16.258 "read": true, 00:21:16.258 "write": true, 00:21:16.258 "unmap": true, 00:21:16.258 "flush": true, 00:21:16.258 "reset": true, 00:21:16.258 "nvme_admin": true, 00:21:16.258 "nvme_io": true, 00:21:16.259 "nvme_io_md": false, 00:21:16.259 "write_zeroes": true, 00:21:16.259 "zcopy": false, 00:21:16.259 "get_zone_info": false, 00:21:16.259 "zone_management": false, 00:21:16.259 "zone_append": false, 00:21:16.259 "compare": true, 00:21:16.259 "compare_and_write": false, 00:21:16.259 "abort": true, 00:21:16.259 "seek_hole": false, 00:21:16.259 "seek_data": false, 00:21:16.259 "copy": true, 00:21:16.259 "nvme_iov_md": false 00:21:16.259 }, 00:21:16.259 "driver_specific": { 00:21:16.259 "nvme": [ 00:21:16.259 { 00:21:16.259 "pci_address": "0000:00:11.0", 00:21:16.259 "trid": { 00:21:16.259 "trtype": "PCIe", 00:21:16.259 "traddr": "0000:00:11.0" 00:21:16.259 }, 00:21:16.259 "ctrlr_data": { 00:21:16.259 "cntlid": 0, 00:21:16.259 "vendor_id": "0x1b36", 00:21:16.259 "model_number": "QEMU NVMe Ctrl", 00:21:16.259 "serial_number": "12341", 00:21:16.259 "firmware_revision": "8.0.0", 00:21:16.259 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:16.259 "oacs": { 00:21:16.259 "security": 0, 00:21:16.259 "format": 1, 00:21:16.259 "firmware": 0, 00:21:16.259 "ns_manage": 1 00:21:16.259 }, 00:21:16.259 "multi_ctrlr": false, 00:21:16.259 "ana_reporting": false 00:21:16.259 }, 00:21:16.259 "vs": { 00:21:16.259 "nvme_version": "1.4" 00:21:16.259 }, 00:21:16.259 "ns_data": { 00:21:16.259 "id": 1, 00:21:16.259 "can_share": false 00:21:16.259 } 00:21:16.259 } 00:21:16.259 ], 00:21:16.259 "mp_policy": "active_passive" 00:21:16.259 } 00:21:16.259 } 00:21:16.259 ]' 00:21:16.259 11:24:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:16.259 11:24:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:21:16.259 11:24:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:16.259 11:24:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:21:16.259 11:24:53 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:21:16.259 11:24:53 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:21:16.259 11:24:53 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:16.259 11:24:53 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:16.259 11:24:53 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:16.259 11:24:53 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:16.259 11:24:53 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:16.518 11:24:53 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=298bb846-5292-4a6f-a3da-913e44be28a5 00:21:16.518 11:24:53 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:16.518 11:24:53 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 298bb846-5292-4a6f-a3da-913e44be28a5 00:21:16.777 11:24:54 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:17.035 11:24:54 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=9b5fecf5-e325-4985-b657-685b121fce69 00:21:17.035 11:24:54 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9b5fecf5-e325-4985-b657-685b121fce69 00:21:17.294 11:24:54 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=130f1328-2f91-4dc7-a01a-5f0d10bbc81a 00:21:17.294 11:24:54 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:17.294 11:24:54 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 130f1328-2f91-4dc7-a01a-5f0d10bbc81a 00:21:17.294 11:24:54 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:17.294 11:24:54 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:17.294 11:24:54 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=130f1328-2f91-4dc7-a01a-5f0d10bbc81a 00:21:17.294 11:24:54 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:17.294 11:24:54 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 130f1328-2f91-4dc7-a01a-5f0d10bbc81a 00:21:17.294 11:24:54 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=130f1328-2f91-4dc7-a01a-5f0d10bbc81a 00:21:17.294 11:24:54 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:17.294 11:24:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:21:17.294 11:24:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:21:17.294 11:24:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 130f1328-2f91-4dc7-a01a-5f0d10bbc81a 00:21:17.294 11:24:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:17.294 { 00:21:17.294 "name": "130f1328-2f91-4dc7-a01a-5f0d10bbc81a", 00:21:17.294 "aliases": [ 00:21:17.294 "lvs/nvme0n1p0" 00:21:17.294 ], 00:21:17.294 "product_name": "Logical Volume", 00:21:17.294 "block_size": 4096, 00:21:17.294 "num_blocks": 26476544, 00:21:17.294 "uuid": "130f1328-2f91-4dc7-a01a-5f0d10bbc81a", 00:21:17.294 "assigned_rate_limits": { 00:21:17.294 "rw_ios_per_sec": 0, 00:21:17.294 "rw_mbytes_per_sec": 0, 00:21:17.294 "r_mbytes_per_sec": 0, 00:21:17.294 "w_mbytes_per_sec": 0 00:21:17.294 }, 00:21:17.294 "claimed": false, 00:21:17.294 "zoned": false, 00:21:17.294 "supported_io_types": { 00:21:17.294 "read": true, 00:21:17.294 "write": true, 00:21:17.294 "unmap": true, 00:21:17.294 "flush": false, 00:21:17.294 "reset": true, 00:21:17.294 "nvme_admin": false, 00:21:17.294 "nvme_io": false, 00:21:17.294 "nvme_io_md": false, 00:21:17.294 "write_zeroes": true, 00:21:17.294 "zcopy": false, 00:21:17.294 "get_zone_info": false, 00:21:17.294 "zone_management": false, 00:21:17.294 "zone_append": false, 00:21:17.294 "compare": false, 00:21:17.294 "compare_and_write": false, 00:21:17.294 "abort": false, 00:21:17.294 "seek_hole": true, 00:21:17.294 "seek_data": true, 00:21:17.294 "copy": false, 00:21:17.294 "nvme_iov_md": false 00:21:17.294 }, 00:21:17.294 "driver_specific": { 00:21:17.294 "lvol": { 00:21:17.294 "lvol_store_uuid": "9b5fecf5-e325-4985-b657-685b121fce69", 00:21:17.294 "base_bdev": "nvme0n1", 00:21:17.294 "thin_provision": true, 00:21:17.294 "num_allocated_clusters": 0, 00:21:17.294 "snapshot": false, 00:21:17.294 "clone": false, 00:21:17.294 "esnap_clone": false 00:21:17.294 } 00:21:17.294 } 00:21:17.294 } 00:21:17.294 ]' 00:21:17.294 11:24:54 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:17.554 11:24:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:21:17.554 11:24:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:17.554 11:24:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:21:17.554 11:24:54 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:21:17.554 11:24:54 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:21:17.554 11:24:54 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:17.554 11:24:54 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:17.554 11:24:54 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:17.814 11:24:55 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:17.814 11:24:55 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:17.814 11:24:55 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 130f1328-2f91-4dc7-a01a-5f0d10bbc81a 00:21:17.814 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=130f1328-2f91-4dc7-a01a-5f0d10bbc81a 00:21:17.814 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:17.814 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:21:17.814 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:21:17.814 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 130f1328-2f91-4dc7-a01a-5f0d10bbc81a 00:21:18.072 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:18.072 { 00:21:18.072 "name": "130f1328-2f91-4dc7-a01a-5f0d10bbc81a", 00:21:18.072 "aliases": [ 00:21:18.072 "lvs/nvme0n1p0" 00:21:18.072 ], 00:21:18.072 "product_name": "Logical Volume", 00:21:18.073 "block_size": 4096, 00:21:18.073 "num_blocks": 26476544, 00:21:18.073 "uuid": "130f1328-2f91-4dc7-a01a-5f0d10bbc81a", 00:21:18.073 "assigned_rate_limits": { 00:21:18.073 "rw_ios_per_sec": 0, 00:21:18.073 "rw_mbytes_per_sec": 0, 00:21:18.073 "r_mbytes_per_sec": 0, 00:21:18.073 "w_mbytes_per_sec": 0 00:21:18.073 }, 00:21:18.073 "claimed": false, 00:21:18.073 "zoned": false, 00:21:18.073 "supported_io_types": { 00:21:18.073 "read": true, 00:21:18.073 "write": true, 00:21:18.073 "unmap": true, 00:21:18.073 "flush": false, 00:21:18.073 "reset": true, 00:21:18.073 "nvme_admin": false, 00:21:18.073 "nvme_io": false, 00:21:18.073 "nvme_io_md": false, 00:21:18.073 "write_zeroes": true, 00:21:18.073 "zcopy": false, 00:21:18.073 "get_zone_info": false, 00:21:18.073 "zone_management": false, 00:21:18.073 "zone_append": false, 00:21:18.073 "compare": false, 00:21:18.073 "compare_and_write": false, 00:21:18.073 "abort": false, 00:21:18.073 "seek_hole": true, 00:21:18.073 "seek_data": true, 00:21:18.073 "copy": false, 00:21:18.073 "nvme_iov_md": false 00:21:18.073 }, 00:21:18.073 "driver_specific": { 00:21:18.073 "lvol": { 00:21:18.073 "lvol_store_uuid": "9b5fecf5-e325-4985-b657-685b121fce69", 00:21:18.073 "base_bdev": "nvme0n1", 00:21:18.073 "thin_provision": true, 00:21:18.073 "num_allocated_clusters": 0, 00:21:18.073 "snapshot": false, 00:21:18.073 "clone": false, 00:21:18.073 "esnap_clone": false 00:21:18.073 } 00:21:18.073 } 00:21:18.073 } 00:21:18.073 ]' 00:21:18.073 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
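create_nv_cache_bdev has now attached the cache controller (nvc0 at 0000:00:10.0); the following records size the write-buffer cache from the base volume and carve it out of nvc0n1 with bdev_split_create, producing nvc0n1p0. The traced values imply a 5 % rule: 103424 MiB × 5 / 100 = 5171 MiB, which matches both base_size=5171 above and cache_size=5171 below (the formula is inferred from these numbers; the authoritative computation is in test/ftl/common.sh). A sketch under that assumption:

# NV cache sizing as inferred from the traced values (~5% of base size).
base_mb=$(get_bdev_size 130f1328-2f91-4dc7-a01a-5f0d10bbc81a)   # 103424 MiB
cache_mb=$((base_mb * 5 / 100))                                 # 5171 MiB
"$rpc_py" bdev_split_create nvc0n1 -s "$cache_mb" 1             # -> nvc0n1p0

A few records further on, restore.sh line 54 logs "[: : integer expression expected": the variable tested there is empty in this run (most likely because -f was not passed), and '[' "" -eq 1 ']' is not a valid integer comparison, so the '[' builtin errors out with a non-zero status, the guard treats it as false, and the run carries on to bdev_ftl_create.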
00:21:18.073 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:21:18.073 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:18.073 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:21:18.073 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:21:18.073 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:21:18.073 11:24:55 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:18.073 11:24:55 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:18.332 11:24:55 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:18.332 11:24:55 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 130f1328-2f91-4dc7-a01a-5f0d10bbc81a 00:21:18.332 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=130f1328-2f91-4dc7-a01a-5f0d10bbc81a 00:21:18.332 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:18.332 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:21:18.332 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:21:18.332 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 130f1328-2f91-4dc7-a01a-5f0d10bbc81a 00:21:18.591 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:18.591 { 00:21:18.591 "name": "130f1328-2f91-4dc7-a01a-5f0d10bbc81a", 00:21:18.591 "aliases": [ 00:21:18.591 "lvs/nvme0n1p0" 00:21:18.591 ], 00:21:18.591 "product_name": "Logical Volume", 00:21:18.591 "block_size": 4096, 00:21:18.591 "num_blocks": 26476544, 00:21:18.591 "uuid": "130f1328-2f91-4dc7-a01a-5f0d10bbc81a", 00:21:18.591 "assigned_rate_limits": { 00:21:18.591 "rw_ios_per_sec": 0, 00:21:18.591 "rw_mbytes_per_sec": 0, 00:21:18.591 "r_mbytes_per_sec": 0, 00:21:18.591 "w_mbytes_per_sec": 0 00:21:18.591 }, 00:21:18.591 "claimed": false, 00:21:18.591 "zoned": false, 00:21:18.591 "supported_io_types": { 00:21:18.591 "read": true, 00:21:18.591 "write": true, 00:21:18.591 "unmap": true, 00:21:18.591 "flush": false, 00:21:18.591 "reset": true, 00:21:18.591 "nvme_admin": false, 00:21:18.591 "nvme_io": false, 00:21:18.591 "nvme_io_md": false, 00:21:18.591 "write_zeroes": true, 00:21:18.591 "zcopy": false, 00:21:18.591 "get_zone_info": false, 00:21:18.591 "zone_management": false, 00:21:18.591 "zone_append": false, 00:21:18.591 "compare": false, 00:21:18.591 "compare_and_write": false, 00:21:18.591 "abort": false, 00:21:18.591 "seek_hole": true, 00:21:18.591 "seek_data": true, 00:21:18.591 "copy": false, 00:21:18.591 "nvme_iov_md": false 00:21:18.591 }, 00:21:18.591 "driver_specific": { 00:21:18.591 "lvol": { 00:21:18.591 "lvol_store_uuid": "9b5fecf5-e325-4985-b657-685b121fce69", 00:21:18.591 "base_bdev": "nvme0n1", 00:21:18.591 "thin_provision": true, 00:21:18.591 "num_allocated_clusters": 0, 00:21:18.591 "snapshot": false, 00:21:18.591 "clone": false, 00:21:18.591 "esnap_clone": false 00:21:18.591 } 00:21:18.591 } 00:21:18.591 } 00:21:18.591 ]' 00:21:18.591 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:18.591 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:21:18.591 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:18.591 11:24:55 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:21:18.591 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:21:18.591 11:24:55 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:21:18.591 11:24:55 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:18.591 11:24:55 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 130f1328-2f91-4dc7-a01a-5f0d10bbc81a --l2p_dram_limit 10' 00:21:18.591 11:24:55 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:18.591 11:24:55 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:18.591 11:24:55 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:18.591 11:24:55 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:18.591 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:18.591 11:24:55 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 130f1328-2f91-4dc7-a01a-5f0d10bbc81a --l2p_dram_limit 10 -c nvc0n1p0 00:21:18.851 [2024-11-15 11:24:56.013879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.851 [2024-11-15 11:24:56.013936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:18.851 [2024-11-15 11:24:56.013956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:18.851 [2024-11-15 11:24:56.013968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.851 [2024-11-15 11:24:56.014047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.851 [2024-11-15 11:24:56.014060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:18.851 [2024-11-15 11:24:56.014074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:18.851 [2024-11-15 11:24:56.014085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.851 [2024-11-15 11:24:56.014109] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:18.851 [2024-11-15 11:24:56.015147] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:18.851 [2024-11-15 11:24:56.015184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.851 [2024-11-15 11:24:56.015195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:18.851 [2024-11-15 11:24:56.015212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:21:18.851 [2024-11-15 11:24:56.015222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.851 [2024-11-15 11:24:56.015303] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e66deb06-c008-4aa6-8b67-bc55c34f40dd 00:21:18.851 [2024-11-15 11:24:56.016754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.851 [2024-11-15 11:24:56.016793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:18.851 [2024-11-15 11:24:56.016805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:18.851 [2024-11-15 11:24:56.016818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.851 [2024-11-15 11:24:56.024339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.851 [2024-11-15 
11:24:56.024374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:18.851 [2024-11-15 11:24:56.024386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.479 ms 00:21:18.851 [2024-11-15 11:24:56.024399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.851 [2024-11-15 11:24:56.024499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.851 [2024-11-15 11:24:56.024517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:18.851 [2024-11-15 11:24:56.024528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:21:18.851 [2024-11-15 11:24:56.024545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.851 [2024-11-15 11:24:56.024626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.851 [2024-11-15 11:24:56.024643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:18.851 [2024-11-15 11:24:56.024654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:18.851 [2024-11-15 11:24:56.024670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.851 [2024-11-15 11:24:56.024695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:18.851 [2024-11-15 11:24:56.030085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.851 [2024-11-15 11:24:56.030120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:18.851 [2024-11-15 11:24:56.030137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.404 ms 00:21:18.851 [2024-11-15 11:24:56.030147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.851 [2024-11-15 11:24:56.030194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.851 [2024-11-15 11:24:56.030205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:18.851 [2024-11-15 11:24:56.030217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:18.851 [2024-11-15 11:24:56.030228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.851 [2024-11-15 11:24:56.030275] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:18.851 [2024-11-15 11:24:56.030401] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:18.851 [2024-11-15 11:24:56.030421] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:18.851 [2024-11-15 11:24:56.030435] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:18.851 [2024-11-15 11:24:56.030451] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:18.852 [2024-11-15 11:24:56.030463] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:18.852 [2024-11-15 11:24:56.030477] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:18.852 [2024-11-15 11:24:56.030487] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:18.852 [2024-11-15 11:24:56.030502] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:18.852 [2024-11-15 11:24:56.030512] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:18.852 [2024-11-15 11:24:56.030525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.852 [2024-11-15 11:24:56.030535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:18.852 [2024-11-15 11:24:56.030549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:21:18.852 [2024-11-15 11:24:56.030581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.852 [2024-11-15 11:24:56.030660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.852 [2024-11-15 11:24:56.030671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:18.852 [2024-11-15 11:24:56.030685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:18.852 [2024-11-15 11:24:56.030694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.852 [2024-11-15 11:24:56.030788] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:18.852 [2024-11-15 11:24:56.030800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:18.852 [2024-11-15 11:24:56.030813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:18.852 [2024-11-15 11:24:56.030823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.852 [2024-11-15 11:24:56.030844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:18.852 [2024-11-15 11:24:56.030854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:18.852 [2024-11-15 11:24:56.030866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:18.852 [2024-11-15 11:24:56.030875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:18.852 [2024-11-15 11:24:56.030887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:18.852 [2024-11-15 11:24:56.030897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:18.852 [2024-11-15 11:24:56.030908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:18.852 [2024-11-15 11:24:56.030918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:18.852 [2024-11-15 11:24:56.030929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:18.852 [2024-11-15 11:24:56.030940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:18.852 [2024-11-15 11:24:56.030952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:18.852 [2024-11-15 11:24:56.030962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.852 [2024-11-15 11:24:56.030975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:18.852 [2024-11-15 11:24:56.030984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:18.852 [2024-11-15 11:24:56.030997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.852 [2024-11-15 11:24:56.031006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:18.852 [2024-11-15 11:24:56.031018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:18.852 [2024-11-15 11:24:56.031028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.852 [2024-11-15 11:24:56.031039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:18.852 
[2024-11-15 11:24:56.031049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:18.852 [2024-11-15 11:24:56.031061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.852 [2024-11-15 11:24:56.031071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:18.852 [2024-11-15 11:24:56.031082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:18.852 [2024-11-15 11:24:56.031091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.852 [2024-11-15 11:24:56.031102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:18.852 [2024-11-15 11:24:56.031111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:18.852 [2024-11-15 11:24:56.031123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.852 [2024-11-15 11:24:56.031132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:18.852 [2024-11-15 11:24:56.031146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:18.852 [2024-11-15 11:24:56.031156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:18.852 [2024-11-15 11:24:56.031167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:18.852 [2024-11-15 11:24:56.031177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:18.852 [2024-11-15 11:24:56.031189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:18.852 [2024-11-15 11:24:56.031198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:18.852 [2024-11-15 11:24:56.031209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:18.852 [2024-11-15 11:24:56.031219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.852 [2024-11-15 11:24:56.031230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:18.852 [2024-11-15 11:24:56.031239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:18.852 [2024-11-15 11:24:56.031250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.852 [2024-11-15 11:24:56.031259] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:18.852 [2024-11-15 11:24:56.031271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:18.852 [2024-11-15 11:24:56.031282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:18.852 [2024-11-15 11:24:56.031296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.852 [2024-11-15 11:24:56.031306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:18.852 [2024-11-15 11:24:56.031320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:18.852 [2024-11-15 11:24:56.031329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:18.852 [2024-11-15 11:24:56.031341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:18.852 [2024-11-15 11:24:56.031350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:18.852 [2024-11-15 11:24:56.031362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:18.852 [2024-11-15 11:24:56.031376] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:18.852 [2024-11-15 
11:24:56.031391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:18.852 [2024-11-15 11:24:56.031406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:18.852 [2024-11-15 11:24:56.031419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:18.852 [2024-11-15 11:24:56.031429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:18.852 [2024-11-15 11:24:56.031442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:18.852 [2024-11-15 11:24:56.031452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:18.852 [2024-11-15 11:24:56.031465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:18.852 [2024-11-15 11:24:56.031476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:18.852 [2024-11-15 11:24:56.031489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:18.852 [2024-11-15 11:24:56.031499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:18.852 [2024-11-15 11:24:56.031514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:18.852 [2024-11-15 11:24:56.031525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:18.852 [2024-11-15 11:24:56.031537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:18.852 [2024-11-15 11:24:56.031547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:18.852 [2024-11-15 11:24:56.031572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:18.852 [2024-11-15 11:24:56.031583] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:18.852 [2024-11-15 11:24:56.031597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:18.852 [2024-11-15 11:24:56.031608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:18.852 [2024-11-15 11:24:56.031621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:18.852 [2024-11-15 11:24:56.031631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:18.852 [2024-11-15 11:24:56.031644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:18.852 [2024-11-15 11:24:56.031655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.852 [2024-11-15 11:24:56.031668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:18.852 [2024-11-15 11:24:56.031679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:21:18.852 [2024-11-15 11:24:56.031691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.852 [2024-11-15 11:24:56.031733] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:18.852 [2024-11-15 11:24:56.031752] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:22.137 [2024-11-15 11:24:58.804150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:58.804239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:22.137 [2024-11-15 11:24:58.804274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2776.914 ms 00:21:22.137 [2024-11-15 11:24:58.804288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:58.843327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:58.843386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.137 [2024-11-15 11:24:58.843403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.703 ms 00:21:22.137 [2024-11-15 11:24:58.843417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:58.843567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:58.843585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:22.137 [2024-11-15 11:24:58.843597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:22.137 [2024-11-15 11:24:58.843617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:58.891307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:58.891356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.137 [2024-11-15 11:24:58.891370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.724 ms 00:21:22.137 [2024-11-15 11:24:58.891385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:58.891426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:58.891443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.137 [2024-11-15 11:24:58.891454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:22.137 [2024-11-15 11:24:58.891467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:58.891975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:58.892004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.137 [2024-11-15 11:24:58.892015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:21:22.137 [2024-11-15 11:24:58.892028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 
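
The sizes in the layout dump above follow from figures already present earlier in the log rather than anything new: the base bdev was probed at nb=26476544 blocks, which at the 4 KiB block size used here comes to 26476544 x 4096 / 2^20 = 103424 MiB, matching the bdev_size echoed by autotest_common.sh and the "Base device capacity: 103424.00 MiB" line. Likewise the l2p region is 80.00 MiB because the table holds 20971520 entries of 4 bytes each, while the --l2p_dram_limit 10 passed in the construct args caps the resident portion (the cache reports "9 (of 10) MiB" shortly below). A quick bash sanity check of that arithmetic, using only numbers taken from the log:

  # Recompute the two sizes shown in the layout dump (figures from the log).
  nb=26476544                          # block count reported for the base bdev
  echo $(( nb * 4096 / 1048576 ))      # 103424 MiB of base device capacity
  echo $(( 20971520 * 4 / 1048576 ))   # 80 MiB for the 4-byte-per-entry L2P
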
[2024-11-15 11:24:58.892129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:58.892144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.137 [2024-11-15 11:24:58.892158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:21:22.137 [2024-11-15 11:24:58.892173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:58.912962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:58.913012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.137 [2024-11-15 11:24:58.913026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.802 ms 00:21:22.137 [2024-11-15 11:24:58.913040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:58.936459] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:22.137 [2024-11-15 11:24:58.939923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:58.939953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:22.137 [2024-11-15 11:24:58.939969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.825 ms 00:21:22.137 [2024-11-15 11:24:58.939981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:59.030955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:59.031023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:22.137 [2024-11-15 11:24:59.031045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.078 ms 00:21:22.137 [2024-11-15 11:24:59.031056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:59.031254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:59.031271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:22.137 [2024-11-15 11:24:59.031288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:21:22.137 [2024-11-15 11:24:59.031299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:59.068016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:59.068058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:22.137 [2024-11-15 11:24:59.068075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.716 ms 00:21:22.137 [2024-11-15 11:24:59.068092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:59.103223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:59.103263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:22.137 [2024-11-15 11:24:59.103281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.136 ms 00:21:22.137 [2024-11-15 11:24:59.103291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:59.104047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:59.104076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:22.137 
[2024-11-15 11:24:59.104092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:21:22.137 [2024-11-15 11:24:59.104105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:59.208717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:59.208779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:22.137 [2024-11-15 11:24:59.208804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.698 ms 00:21:22.137 [2024-11-15 11:24:59.208815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:59.249024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:59.249091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:22.137 [2024-11-15 11:24:59.249111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.137 ms 00:21:22.137 [2024-11-15 11:24:59.249122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:59.287952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:59.288013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:22.137 [2024-11-15 11:24:59.288032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.827 ms 00:21:22.137 [2024-11-15 11:24:59.288043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:59.325533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:59.325611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:22.137 [2024-11-15 11:24:59.325632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.493 ms 00:21:22.137 [2024-11-15 11:24:59.325643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:59.325698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:59.325711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:22.137 [2024-11-15 11:24:59.325729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:22.137 [2024-11-15 11:24:59.325740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-15 11:24:59.325853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-15 11:24:59.325867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:22.137 [2024-11-15 11:24:59.325884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:22.137 [2024-11-15 11:24:59.325894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.138 [2024-11-15 11:24:59.326992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3317.957 ms, result 0 00:21:22.138 { 00:21:22.138 "name": "ftl0", 00:21:22.138 "uuid": "e66deb06-c008-4aa6-8b67-bc55c34f40dd" 00:21:22.138 } 00:21:22.138 11:24:59 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:22.138 11:24:59 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:22.396 11:24:59 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:22.396 11:24:59 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:22.396 [2024-11-15 11:24:59.765696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.396 [2024-11-15 11:24:59.765752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:22.396 [2024-11-15 11:24:59.765769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:22.396 [2024-11-15 11:24:59.765792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.396 [2024-11-15 11:24:59.765819] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:22.396 [2024-11-15 11:24:59.770092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.396 [2024-11-15 11:24:59.770125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:22.396 [2024-11-15 11:24:59.770142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.254 ms 00:21:22.396 [2024-11-15 11:24:59.770160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.396 [2024-11-15 11:24:59.770415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.396 [2024-11-15 11:24:59.770432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:22.396 [2024-11-15 11:24:59.770446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:21:22.396 [2024-11-15 11:24:59.770456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.396 [2024-11-15 11:24:59.772989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.396 [2024-11-15 11:24:59.773013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:22.396 [2024-11-15 11:24:59.773029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.516 ms 00:21:22.396 [2024-11-15 11:24:59.773039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.396 [2024-11-15 11:24:59.778066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.396 [2024-11-15 11:24:59.778100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:22.396 [2024-11-15 11:24:59.778119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.011 ms 00:21:22.396 [2024-11-15 11:24:59.778130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.655 [2024-11-15 11:24:59.815299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.655 [2024-11-15 11:24:59.815349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:22.655 [2024-11-15 11:24:59.815383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.147 ms 00:21:22.655 [2024-11-15 11:24:59.815394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.655 [2024-11-15 11:24:59.837886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.655 [2024-11-15 11:24:59.837923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:22.655 [2024-11-15 11:24:59.837956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.476 ms 00:21:22.655 [2024-11-15 11:24:59.837967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.655 [2024-11-15 11:24:59.838128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.655 [2024-11-15 11:24:59.838143] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:22.655 [2024-11-15 11:24:59.838164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:21:22.655 [2024-11-15 11:24:59.838175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.655 [2024-11-15 11:24:59.875170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.655 [2024-11-15 11:24:59.875206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:22.655 [2024-11-15 11:24:59.875223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.025 ms 00:21:22.655 [2024-11-15 11:24:59.875234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.655 [2024-11-15 11:24:59.911343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.655 [2024-11-15 11:24:59.911382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:22.655 [2024-11-15 11:24:59.911399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.117 ms 00:21:22.655 [2024-11-15 11:24:59.911410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.655 [2024-11-15 11:24:59.947034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.655 [2024-11-15 11:24:59.947085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:22.655 [2024-11-15 11:24:59.947103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.632 ms 00:21:22.655 [2024-11-15 11:24:59.947112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.655 [2024-11-15 11:24:59.982922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.655 [2024-11-15 11:24:59.982959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:22.655 [2024-11-15 11:24:59.982976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.765 ms 00:21:22.655 [2024-11-15 11:24:59.982986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.655 [2024-11-15 11:24:59.983030] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:22.655 [2024-11-15 11:24:59.983046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:22.655 [2024-11-15 11:24:59.983061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983160] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 
[2024-11-15 11:24:59.983462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:22.656 [2024-11-15 11:24:59.983788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.983993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:22.656 [2024-11-15 11:24:59.984176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:22.657 [2024-11-15 11:24:59.984186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:22.657 [2024-11-15 11:24:59.984199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:22.657 [2024-11-15 11:24:59.984210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:22.657 [2024-11-15 11:24:59.984224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:22.657 [2024-11-15 11:24:59.984234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:22.657 [2024-11-15 11:24:59.984249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:22.657 [2024-11-15 11:24:59.984259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:22.657 [2024-11-15 11:24:59.984272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:22.657 [2024-11-15 11:24:59.984290] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:22.657 [2024-11-15 11:24:59.984305] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e66deb06-c008-4aa6-8b67-bc55c34f40dd 00:21:22.657 [2024-11-15 11:24:59.984317] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:22.657 [2024-11-15 11:24:59.984332] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:22.657 [2024-11-15 11:24:59.984342] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:22.657 [2024-11-15 11:24:59.984358] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:22.657 [2024-11-15 11:24:59.984368] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:22.657 [2024-11-15 11:24:59.984381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:22.657 [2024-11-15 11:24:59.984391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:22.657 [2024-11-15 11:24:59.984403] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:22.657 [2024-11-15 11:24:59.984412] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:22.657 [2024-11-15 11:24:59.984424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.657 [2024-11-15 11:24:59.984434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:22.657 [2024-11-15 11:24:59.984447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.398 ms 00:21:22.657 [2024-11-15 11:24:59.984457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.657 [2024-11-15 11:25:00.004698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.657 [2024-11-15 11:25:00.004732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:22.657 [2024-11-15 11:25:00.004748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.213 ms 00:21:22.657 [2024-11-15 11:25:00.004758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.657 [2024-11-15 11:25:00.005400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.657 [2024-11-15 11:25:00.005422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:22.657 [2024-11-15 11:25:00.005439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:21:22.657 [2024-11-15 11:25:00.005449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.916 [2024-11-15 11:25:00.072148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.916 [2024-11-15 11:25:00.072187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.916 [2024-11-15 11:25:00.072219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.916 [2024-11-15 11:25:00.072230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.916 [2024-11-15 11:25:00.072293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.916 [2024-11-15 11:25:00.072305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.916 [2024-11-15 11:25:00.072322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.916 [2024-11-15 11:25:00.072332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.916 [2024-11-15 11:25:00.072438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.916 [2024-11-15 11:25:00.072453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.916 [2024-11-15 11:25:00.072466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.916 [2024-11-15 11:25:00.072476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.916 [2024-11-15 11:25:00.072501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.916 [2024-11-15 11:25:00.072511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.916 [2024-11-15 11:25:00.072524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.916 [2024-11-15 11:25:00.072534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.916 [2024-11-15 11:25:00.197836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.916 [2024-11-15 11:25:00.197883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.916 [2024-11-15 11:25:00.197900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:22.916 [2024-11-15 11:25:00.197911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.916 [2024-11-15 11:25:00.299399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.916 [2024-11-15 11:25:00.299453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.916 [2024-11-15 11:25:00.299471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.916 [2024-11-15 11:25:00.299485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.916 [2024-11-15 11:25:00.299627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.916 [2024-11-15 11:25:00.299642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:22.916 [2024-11-15 11:25:00.299656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.916 [2024-11-15 11:25:00.299666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.916 [2024-11-15 11:25:00.299732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.916 [2024-11-15 11:25:00.299744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:22.916 [2024-11-15 11:25:00.299757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.916 [2024-11-15 11:25:00.299768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.916 [2024-11-15 11:25:00.299881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.916 [2024-11-15 11:25:00.299894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:22.916 [2024-11-15 11:25:00.299907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.916 [2024-11-15 11:25:00.299918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.916 [2024-11-15 11:25:00.299963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.916 [2024-11-15 11:25:00.299976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:22.916 [2024-11-15 11:25:00.299988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.916 [2024-11-15 11:25:00.299999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.916 [2024-11-15 11:25:00.300043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.916 [2024-11-15 11:25:00.300054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:22.916 [2024-11-15 11:25:00.300066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.916 [2024-11-15 11:25:00.300076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.916 [2024-11-15 11:25:00.300129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.916 [2024-11-15 11:25:00.300141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:22.916 [2024-11-15 11:25:00.300154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.916 [2024-11-15 11:25:00.300164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.916 [2024-11-15 11:25:00.300300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 535.435 ms, result 0 00:21:22.916 true 00:21:23.197 11:25:00 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76289 
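
At this point restore.sh has completed one full cycle against ftl0: bdev_ftl_create returned the device JSON above ("name": "ftl0" plus its UUID), save_subsystem_config -n bdev captured the bdev configuration for the later restore step, and bdev_ftl_unload ran the 535 ms "FTL shutdown" sequence and printed true. The killprocess 76289 call traced next comes from autotest_common.sh; a condensed sketch of the flow visible in the trace follows (the function body here is an assumption, only the individual commands, kill -0, uname, ps, kill and wait, appear in the log):

  # Hypothetical condensed form of the killprocess helper traced below.
  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1           # the '[' -z 76289 ']' guard
      kill -0 "$pid"                      # verify the process still exists
      if [[ $(uname) == Linux ]]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")
          [[ $name != sudo ]]             # refuse to kill a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                         # reap and propagate exit status
  }
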
00:21:23.197 11:25:00 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 76289 ']' 00:21:23.197 11:25:00 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 76289 00:21:23.197 11:25:00 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:21:23.197 11:25:00 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:23.197 11:25:00 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76289 00:21:23.198 11:25:00 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:23.198 11:25:00 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:23.198 killing process with pid 76289 00:21:23.198 11:25:00 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76289' 00:21:23.198 11:25:00 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 76289 00:21:23.198 11:25:00 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 76289 00:21:28.489 11:25:05 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:32.722 262144+0 records in 00:21:32.722 262144+0 records out 00:21:32.722 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.30192 s, 250 MB/s 00:21:32.722 11:25:09 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:34.624 11:25:11 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:34.624 [2024-11-15 11:25:11.707209] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:21:34.624 [2024-11-15 11:25:11.707750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76531 ] 00:21:34.624 [2024-11-15 11:25:11.895246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.624 [2024-11-15 11:25:12.017171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.192 [2024-11-15 11:25:12.399283] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.192 [2024-11-15 11:25:12.399360] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.192 [2024-11-15 11:25:12.568689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.192 [2024-11-15 11:25:12.568737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:35.192 [2024-11-15 11:25:12.568762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:35.192 [2024-11-15 11:25:12.568772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.192 [2024-11-15 11:25:12.568828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.192 [2024-11-15 11:25:12.568840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.192 [2024-11-15 11:25:12.568857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:35.192 [2024-11-15 11:25:12.568869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.192 [2024-11-15 11:25:12.568891] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:35.192 [2024-11-15 11:25:12.569896] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:35.192 [2024-11-15 11:25:12.569926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.192 [2024-11-15 11:25:12.569938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.192 [2024-11-15 11:25:12.569949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:21:35.192 [2024-11-15 11:25:12.569959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.192 [2024-11-15 11:25:12.571650] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:35.192 [2024-11-15 11:25:12.591157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.192 [2024-11-15 11:25:12.591203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:35.193 [2024-11-15 11:25:12.591220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.539 ms 00:21:35.193 [2024-11-15 11:25:12.591245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.193 [2024-11-15 11:25:12.591320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.193 [2024-11-15 11:25:12.591335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:35.193 [2024-11-15 11:25:12.591345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:35.193 [2024-11-15 11:25:12.591355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.453 [2024-11-15 11:25:12.598551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.453 [2024-11-15 11:25:12.598597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.453 [2024-11-15 11:25:12.598609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.129 ms 00:21:35.453 [2024-11-15 11:25:12.598625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.453 [2024-11-15 11:25:12.598708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.453 [2024-11-15 11:25:12.598723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.453 [2024-11-15 11:25:12.598735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:35.453 [2024-11-15 11:25:12.598746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.453 [2024-11-15 11:25:12.598792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.453 [2024-11-15 11:25:12.598804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:35.453 [2024-11-15 11:25:12.598815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:35.453 [2024-11-15 11:25:12.598825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.453 [2024-11-15 11:25:12.598856] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:35.453 [2024-11-15 11:25:12.603744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.453 [2024-11-15 11:25:12.603777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.453 [2024-11-15 11:25:12.603789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.907 ms 00:21:35.453 [2024-11-15 11:25:12.603803] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.453 [2024-11-15 11:25:12.603837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.453 [2024-11-15 11:25:12.603847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:35.453 [2024-11-15 11:25:12.603858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:35.453 [2024-11-15 11:25:12.603868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.453 [2024-11-15 11:25:12.603924] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:35.453 [2024-11-15 11:25:12.603948] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:35.453 [2024-11-15 11:25:12.603984] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:35.453 [2024-11-15 11:25:12.604005] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:35.453 [2024-11-15 11:25:12.604096] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:35.453 [2024-11-15 11:25:12.604110] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:35.453 [2024-11-15 11:25:12.604122] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:35.453 [2024-11-15 11:25:12.604135] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:35.453 [2024-11-15 11:25:12.604146] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:35.453 [2024-11-15 11:25:12.604158] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:35.453 [2024-11-15 11:25:12.604168] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:35.453 [2024-11-15 11:25:12.604178] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:35.453 [2024-11-15 11:25:12.604191] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:35.453 [2024-11-15 11:25:12.604201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.453 [2024-11-15 11:25:12.604212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:35.453 [2024-11-15 11:25:12.604223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:21:35.453 [2024-11-15 11:25:12.604233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.453 [2024-11-15 11:25:12.604304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.453 [2024-11-15 11:25:12.604314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:35.453 [2024-11-15 11:25:12.604325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:35.453 [2024-11-15 11:25:12.604334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.453 [2024-11-15 11:25:12.604434] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:35.453 [2024-11-15 11:25:12.604454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:35.453 [2024-11-15 11:25:12.604465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:35.453 [2024-11-15 11:25:12.604476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.453 [2024-11-15 11:25:12.604487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:35.453 [2024-11-15 11:25:12.604496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:35.453 [2024-11-15 11:25:12.604506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:35.453 [2024-11-15 11:25:12.604515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:35.453 [2024-11-15 11:25:12.604524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:35.453 [2024-11-15 11:25:12.604533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.453 [2024-11-15 11:25:12.604543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:35.453 [2024-11-15 11:25:12.604553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:35.453 [2024-11-15 11:25:12.604574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.453 [2024-11-15 11:25:12.604583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:35.453 [2024-11-15 11:25:12.604593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:35.453 [2024-11-15 11:25:12.604612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.453 [2024-11-15 11:25:12.604621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:35.453 [2024-11-15 11:25:12.604630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:35.453 [2024-11-15 11:25:12.604639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.453 [2024-11-15 11:25:12.604649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:35.453 [2024-11-15 11:25:12.604658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:35.453 [2024-11-15 11:25:12.604668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.453 [2024-11-15 11:25:12.604677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:35.453 [2024-11-15 11:25:12.604686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:35.453 [2024-11-15 11:25:12.604695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.453 [2024-11-15 11:25:12.604704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:35.453 [2024-11-15 11:25:12.604714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:35.453 [2024-11-15 11:25:12.604722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.453 [2024-11-15 11:25:12.604731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:35.453 [2024-11-15 11:25:12.604740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:35.454 [2024-11-15 11:25:12.604749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.454 [2024-11-15 11:25:12.604758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:35.454 [2024-11-15 11:25:12.604767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:35.454 [2024-11-15 11:25:12.604776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.454 [2024-11-15 11:25:12.604784] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:35.454 [2024-11-15 11:25:12.604793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:35.454 [2024-11-15 11:25:12.604802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.454 [2024-11-15 11:25:12.604811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:35.454 [2024-11-15 11:25:12.604820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:35.454 [2024-11-15 11:25:12.604830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.454 [2024-11-15 11:25:12.604838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:35.454 [2024-11-15 11:25:12.604848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:35.454 [2024-11-15 11:25:12.604858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.454 [2024-11-15 11:25:12.604867] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:35.454 [2024-11-15 11:25:12.604877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:35.454 [2024-11-15 11:25:12.604887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.454 [2024-11-15 11:25:12.604896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.454 [2024-11-15 11:25:12.604906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:35.454 [2024-11-15 11:25:12.604916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:35.454 [2024-11-15 11:25:12.604925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:35.454 [2024-11-15 11:25:12.604934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:35.454 [2024-11-15 11:25:12.604943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:35.454 [2024-11-15 11:25:12.604952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:35.454 [2024-11-15 11:25:12.604963] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:35.454 [2024-11-15 11:25:12.604975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.454 [2024-11-15 11:25:12.604986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:35.454 [2024-11-15 11:25:12.604997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:35.454 [2024-11-15 11:25:12.605008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:35.454 [2024-11-15 11:25:12.605018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:35.454 [2024-11-15 11:25:12.605029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:35.454 [2024-11-15 11:25:12.605040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:35.454 [2024-11-15 11:25:12.605050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:35.454 [2024-11-15 11:25:12.605060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:35.454 [2024-11-15 11:25:12.605070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:35.454 [2024-11-15 11:25:12.605080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:35.454 [2024-11-15 11:25:12.605090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:35.454 [2024-11-15 11:25:12.605100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:35.454 [2024-11-15 11:25:12.605111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:35.454 [2024-11-15 11:25:12.605121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:35.454 [2024-11-15 11:25:12.605131] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:35.454 [2024-11-15 11:25:12.605145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.454 [2024-11-15 11:25:12.605156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:35.454 [2024-11-15 11:25:12.605167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:35.454 [2024-11-15 11:25:12.605177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:35.454 [2024-11-15 11:25:12.605189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:35.454 [2024-11-15 11:25:12.605201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.454 [2024-11-15 11:25:12.605212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:35.454 [2024-11-15 11:25:12.605222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:21:35.454 [2024-11-15 11:25:12.605233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.454 [2024-11-15 11:25:12.646284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.454 [2024-11-15 11:25:12.646336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.454 [2024-11-15 11:25:12.646352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.067 ms 00:21:35.454 [2024-11-15 11:25:12.646362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.454 [2024-11-15 11:25:12.646467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.454 [2024-11-15 11:25:12.646478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:35.454 [2024-11-15 11:25:12.646489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.056 ms 00:21:35.454 [2024-11-15 11:25:12.646499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.454 [2024-11-15 11:25:12.703192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.454 [2024-11-15 11:25:12.703247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.454 [2024-11-15 11:25:12.703262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.681 ms 00:21:35.454 [2024-11-15 11:25:12.703274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.454 [2024-11-15 11:25:12.703331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.454 [2024-11-15 11:25:12.703342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.454 [2024-11-15 11:25:12.703362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:35.454 [2024-11-15 11:25:12.703373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.454 [2024-11-15 11:25:12.703885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.454 [2024-11-15 11:25:12.703908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:35.454 [2024-11-15 11:25:12.703919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:21:35.454 [2024-11-15 11:25:12.703929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.454 [2024-11-15 11:25:12.704058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.454 [2024-11-15 11:25:12.704072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.454 [2024-11-15 11:25:12.704083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:35.454 [2024-11-15 11:25:12.704102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.454 [2024-11-15 11:25:12.722418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.454 [2024-11-15 11:25:12.722462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:35.454 [2024-11-15 11:25:12.722481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.324 ms 00:21:35.454 [2024-11-15 11:25:12.722492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.454 [2024-11-15 11:25:12.741792] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:35.454 [2024-11-15 11:25:12.741851] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:35.454 [2024-11-15 11:25:12.741869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.454 [2024-11-15 11:25:12.741880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:35.454 [2024-11-15 11:25:12.741891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.278 ms 00:21:35.454 [2024-11-15 11:25:12.741901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.454 [2024-11-15 11:25:12.771407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.454 [2024-11-15 11:25:12.771456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:35.454 [2024-11-15 11:25:12.771470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.508 ms 00:21:35.454 [2024-11-15 11:25:12.771481] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.454 [2024-11-15 11:25:12.789824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.454 [2024-11-15 11:25:12.789874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:35.454 [2024-11-15 11:25:12.789887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.322 ms 00:21:35.454 [2024-11-15 11:25:12.789897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.454 [2024-11-15 11:25:12.807544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.454 [2024-11-15 11:25:12.807587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:35.454 [2024-11-15 11:25:12.807599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.638 ms 00:21:35.454 [2024-11-15 11:25:12.807610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.454 [2024-11-15 11:25:12.808445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.454 [2024-11-15 11:25:12.808479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:35.454 [2024-11-15 11:25:12.808491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:21:35.454 [2024-11-15 11:25:12.808501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.714 [2024-11-15 11:25:12.895352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.714 [2024-11-15 11:25:12.895416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:35.714 [2024-11-15 11:25:12.895433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.962 ms 00:21:35.714 [2024-11-15 11:25:12.895456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.714 [2024-11-15 11:25:12.906261] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:35.714 [2024-11-15 11:25:12.909142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.714 [2024-11-15 11:25:12.909173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:35.714 [2024-11-15 11:25:12.909187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.650 ms 00:21:35.714 [2024-11-15 11:25:12.909197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.714 [2024-11-15 11:25:12.909297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.714 [2024-11-15 11:25:12.909311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:35.714 [2024-11-15 11:25:12.909322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:35.714 [2024-11-15 11:25:12.909333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.714 [2024-11-15 11:25:12.909435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.714 [2024-11-15 11:25:12.909447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:35.714 [2024-11-15 11:25:12.909458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:35.714 [2024-11-15 11:25:12.909468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.714 [2024-11-15 11:25:12.909491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.714 [2024-11-15 11:25:12.909503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:35.714 [2024-11-15 11:25:12.909513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:35.714 [2024-11-15 11:25:12.909524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.714 [2024-11-15 11:25:12.909573] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:35.714 [2024-11-15 11:25:12.909595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.714 [2024-11-15 11:25:12.909611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:35.714 [2024-11-15 11:25:12.909622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:35.714 [2024-11-15 11:25:12.909631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.714 [2024-11-15 11:25:12.945464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.714 [2024-11-15 11:25:12.945520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:35.714 [2024-11-15 11:25:12.945535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.867 ms 00:21:35.714 [2024-11-15 11:25:12.945547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.714 [2024-11-15 11:25:12.945646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.714 [2024-11-15 11:25:12.945659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:35.714 [2024-11-15 11:25:12.945669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:35.714 [2024-11-15 11:25:12.945680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.714 [2024-11-15 11:25:12.946862] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 378.239 ms, result 0 00:21:36.650  [2024-11-15T11:25:14.991Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-15T11:25:16.373Z] Copying: 58/1024 [MB] (28 MBps) [2024-11-15T11:25:17.309Z] Copying: 85/1024 [MB] (27 MBps) [2024-11-15T11:25:18.243Z] Copying: 116/1024 [MB] (31 MBps) [2024-11-15T11:25:19.177Z] Copying: 148/1024 [MB] (31 MBps) [2024-11-15T11:25:20.112Z] Copying: 178/1024 [MB] (30 MBps) [2024-11-15T11:25:21.048Z] Copying: 208/1024 [MB] (30 MBps) [2024-11-15T11:25:21.983Z] Copying: 238/1024 [MB] (29 MBps) [2024-11-15T11:25:23.359Z] Copying: 271/1024 [MB] (32 MBps) [2024-11-15T11:25:24.292Z] Copying: 302/1024 [MB] (31 MBps) [2024-11-15T11:25:25.228Z] Copying: 335/1024 [MB] (32 MBps) [2024-11-15T11:25:26.184Z] Copying: 369/1024 [MB] (34 MBps) [2024-11-15T11:25:27.119Z] Copying: 400/1024 [MB] (31 MBps) [2024-11-15T11:25:28.056Z] Copying: 430/1024 [MB] (30 MBps) [2024-11-15T11:25:28.994Z] Copying: 457/1024 [MB] (27 MBps) [2024-11-15T11:25:30.461Z] Copying: 486/1024 [MB] (28 MBps) [2024-11-15T11:25:31.029Z] Copying: 515/1024 [MB] (28 MBps) [2024-11-15T11:25:31.965Z] Copying: 543/1024 [MB] (28 MBps) [2024-11-15T11:25:33.342Z] Copying: 572/1024 [MB] (28 MBps) [2024-11-15T11:25:34.291Z] Copying: 600/1024 [MB] (28 MBps) [2024-11-15T11:25:35.229Z] Copying: 628/1024 [MB] (27 MBps) [2024-11-15T11:25:36.166Z] Copying: 656/1024 [MB] (27 MBps) [2024-11-15T11:25:37.104Z] Copying: 684/1024 [MB] (28 MBps) [2024-11-15T11:25:38.043Z] Copying: 711/1024 [MB] (27 MBps) [2024-11-15T11:25:38.979Z] Copying: 738/1024 [MB] (27 MBps) [2024-11-15T11:25:39.917Z] Copying: 766/1024 [MB] (27 MBps) [2024-11-15T11:25:41.295Z] Copying: 793/1024 [MB] (27 
MBps) [2024-11-15T11:25:42.234Z] Copying: 821/1024 [MB] (28 MBps) [2024-11-15T11:25:43.170Z] Copying: 850/1024 [MB] (28 MBps) [2024-11-15T11:25:44.107Z] Copying: 880/1024 [MB] (30 MBps) [2024-11-15T11:25:45.043Z] Copying: 910/1024 [MB] (30 MBps) [2024-11-15T11:25:45.979Z] Copying: 940/1024 [MB] (29 MBps) [2024-11-15T11:25:46.915Z] Copying: 970/1024 [MB] (29 MBps) [2024-11-15T11:25:47.851Z] Copying: 998/1024 [MB] (28 MBps) [2024-11-15T11:25:47.851Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-15 11:25:47.820937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.450 [2024-11-15 11:25:47.820993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:10.450 [2024-11-15 11:25:47.821010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:10.450 [2024-11-15 11:25:47.821021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.450 [2024-11-15 11:25:47.821043] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:10.450 [2024-11-15 11:25:47.825205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.450 [2024-11-15 11:25:47.825239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:10.450 [2024-11-15 11:25:47.825252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.150 ms 00:22:10.450 [2024-11-15 11:25:47.825274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.450 [2024-11-15 11:25:47.826937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.450 [2024-11-15 11:25:47.826977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:10.450 [2024-11-15 11:25:47.826990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.640 ms 00:22:10.450 [2024-11-15 11:25:47.827000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.450 [2024-11-15 11:25:47.844683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.450 [2024-11-15 11:25:47.844721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:10.450 [2024-11-15 11:25:47.844734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.693 ms 00:22:10.450 [2024-11-15 11:25:47.844744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.450 [2024-11-15 11:25:47.849770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.450 [2024-11-15 11:25:47.849802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:10.450 [2024-11-15 11:25:47.849815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.987 ms 00:22:10.450 [2024-11-15 11:25:47.849826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.710 [2024-11-15 11:25:47.886513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.710 [2024-11-15 11:25:47.886552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:10.710 [2024-11-15 11:25:47.886572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.681 ms 00:22:10.710 [2024-11-15 11:25:47.886582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.710 [2024-11-15 11:25:47.907829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.710 [2024-11-15 11:25:47.907867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map 
metadata 00:22:10.710 [2024-11-15 11:25:47.907882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.243 ms 00:22:10.710 [2024-11-15 11:25:47.907893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.710 [2024-11-15 11:25:47.908025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.710 [2024-11-15 11:25:47.908038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:10.710 [2024-11-15 11:25:47.908060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:22:10.710 [2024-11-15 11:25:47.908070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.710 [2024-11-15 11:25:47.944586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.710 [2024-11-15 11:25:47.944623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:10.710 [2024-11-15 11:25:47.944652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.559 ms 00:22:10.710 [2024-11-15 11:25:47.944662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.710 [2024-11-15 11:25:47.981197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.710 [2024-11-15 11:25:47.981235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:10.710 [2024-11-15 11:25:47.981265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.557 ms 00:22:10.710 [2024-11-15 11:25:47.981275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.710 [2024-11-15 11:25:48.017598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.710 [2024-11-15 11:25:48.017634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:10.710 [2024-11-15 11:25:48.017647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.344 ms 00:22:10.710 [2024-11-15 11:25:48.017656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.710 [2024-11-15 11:25:48.053385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.710 [2024-11-15 11:25:48.053423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:10.710 [2024-11-15 11:25:48.053436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.702 ms 00:22:10.710 [2024-11-15 11:25:48.053446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.710 [2024-11-15 11:25:48.053485] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:10.710 [2024-11-15 11:25:48.053502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:10.710 [2024-11-15 11:25:48.053815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053857] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.053994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 
11:25:48.054120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:22:10.711 [2024-11-15 11:25:48.054391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:10.711 [2024-11-15 11:25:48.054603] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:10.711 [2024-11-15 11:25:48.054622] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e66deb06-c008-4aa6-8b67-bc55c34f40dd 00:22:10.711 [2024-11-15 11:25:48.054639] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:10.711 [2024-11-15 11:25:48.054649] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:10.711 [2024-11-15 11:25:48.054659] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:10.711 [2024-11-15 11:25:48.054669] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:10.711 [2024-11-15 11:25:48.054679] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:22:10.711 [2024-11-15 11:25:48.054689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:10.711 [2024-11-15 11:25:48.054699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:10.711 [2024-11-15 11:25:48.054722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:10.711 [2024-11-15 11:25:48.054731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:10.711 [2024-11-15 11:25:48.054741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.711 [2024-11-15 11:25:48.054751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:10.711 [2024-11-15 11:25:48.054762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:22:10.711 [2024-11-15 11:25:48.054771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.711 [2024-11-15 11:25:48.074480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.711 [2024-11-15 11:25:48.074516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:10.711 [2024-11-15 11:25:48.074529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.704 ms 00:22:10.711 [2024-11-15 11:25:48.074540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.711 [2024-11-15 11:25:48.075120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.712 [2024-11-15 11:25:48.075138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:10.712 [2024-11-15 11:25:48.075150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:22:10.712 [2024-11-15 11:25:48.075161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.988 [2024-11-15 11:25:48.126204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.988 [2024-11-15 11:25:48.126245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:10.988 [2024-11-15 11:25:48.126258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.988 [2024-11-15 11:25:48.126269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.988 [2024-11-15 11:25:48.126327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.988 [2024-11-15 11:25:48.126338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:10.988 [2024-11-15 11:25:48.126348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.988 [2024-11-15 11:25:48.126357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.988 [2024-11-15 11:25:48.126454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.988 [2024-11-15 11:25:48.126469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:10.988 [2024-11-15 11:25:48.126480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.988 [2024-11-15 11:25:48.126489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.988 [2024-11-15 11:25:48.126506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.988 [2024-11-15 11:25:48.126516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:10.988 [2024-11-15 11:25:48.126526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.988 [2024-11-15 11:25:48.126536] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.988 [2024-11-15 11:25:48.252992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.988 [2024-11-15 11:25:48.253058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:10.988 [2024-11-15 11:25:48.253074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.988 [2024-11-15 11:25:48.253085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.988 [2024-11-15 11:25:48.355319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.988 [2024-11-15 11:25:48.355381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:10.988 [2024-11-15 11:25:48.355396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.988 [2024-11-15 11:25:48.355407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.988 [2024-11-15 11:25:48.355511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.988 [2024-11-15 11:25:48.355523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:10.988 [2024-11-15 11:25:48.355534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.988 [2024-11-15 11:25:48.355545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.988 [2024-11-15 11:25:48.355606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.988 [2024-11-15 11:25:48.355619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:10.988 [2024-11-15 11:25:48.355629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.988 [2024-11-15 11:25:48.355640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.988 [2024-11-15 11:25:48.355771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.988 [2024-11-15 11:25:48.355789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:10.988 [2024-11-15 11:25:48.355800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.988 [2024-11-15 11:25:48.355810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.988 [2024-11-15 11:25:48.355847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.988 [2024-11-15 11:25:48.355859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:10.988 [2024-11-15 11:25:48.355870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.988 [2024-11-15 11:25:48.355880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.988 [2024-11-15 11:25:48.355917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.988 [2024-11-15 11:25:48.355933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:10.988 [2024-11-15 11:25:48.355944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.988 [2024-11-15 11:25:48.355954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.988 [2024-11-15 11:25:48.355993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.989 [2024-11-15 11:25:48.356005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:10.989 [2024-11-15 11:25:48.356016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:22:10.989 [2024-11-15 11:25:48.356025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.989 [2024-11-15 11:25:48.356146] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.040 ms, result 0 00:22:12.921 00:22:12.921 00:22:12.921 11:25:49 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:12.921 [2024-11-15 11:25:49.965203] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:22:12.921 [2024-11-15 11:25:49.965326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76912 ] 00:22:12.921 [2024-11-15 11:25:50.146176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.921 [2024-11-15 11:25:50.264813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.490 [2024-11-15 11:25:50.649115] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.490 [2024-11-15 11:25:50.649189] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.490 [2024-11-15 11:25:50.810260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.490 [2024-11-15 11:25:50.810316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:13.490 [2024-11-15 11:25:50.810338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:13.490 [2024-11-15 11:25:50.810348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.490 [2024-11-15 11:25:50.810399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.490 [2024-11-15 11:25:50.810412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.490 [2024-11-15 11:25:50.810425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:13.490 [2024-11-15 11:25:50.810435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.490 [2024-11-15 11:25:50.810457] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:13.490 [2024-11-15 11:25:50.811451] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:13.490 [2024-11-15 11:25:50.811479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.490 [2024-11-15 11:25:50.811490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.490 [2024-11-15 11:25:50.811501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:22:13.490 [2024-11-15 11:25:50.811510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.490 [2024-11-15 11:25:50.812979] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:13.490 [2024-11-15 11:25:50.831986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.490 [2024-11-15 11:25:50.832025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:13.490 [2024-11-15 11:25:50.832041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 19.038 ms 00:22:13.490 [2024-11-15 11:25:50.832051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.490 [2024-11-15 11:25:50.832120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.490 [2024-11-15 11:25:50.832133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:13.490 [2024-11-15 11:25:50.832145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:13.490 [2024-11-15 11:25:50.832156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.490 [2024-11-15 11:25:50.839001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.490 [2024-11-15 11:25:50.839031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.490 [2024-11-15 11:25:50.839052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.782 ms 00:22:13.490 [2024-11-15 11:25:50.839067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.490 [2024-11-15 11:25:50.839149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.490 [2024-11-15 11:25:50.839163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.490 [2024-11-15 11:25:50.839173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:13.490 [2024-11-15 11:25:50.839183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.490 [2024-11-15 11:25:50.839226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.490 [2024-11-15 11:25:50.839238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:13.490 [2024-11-15 11:25:50.839248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:13.490 [2024-11-15 11:25:50.839258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.490 [2024-11-15 11:25:50.839287] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:13.490 [2024-11-15 11:25:50.844105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.490 [2024-11-15 11:25:50.844135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.490 [2024-11-15 11:25:50.844147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.836 ms 00:22:13.490 [2024-11-15 11:25:50.844161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.490 [2024-11-15 11:25:50.844191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.490 [2024-11-15 11:25:50.844203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:13.490 [2024-11-15 11:25:50.844214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:13.490 [2024-11-15 11:25:50.844224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.490 [2024-11-15 11:25:50.844280] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:13.490 [2024-11-15 11:25:50.844303] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:13.490 [2024-11-15 11:25:50.844340] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:13.490 [2024-11-15 11:25:50.844362] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 
0x190 bytes 00:22:13.490 [2024-11-15 11:25:50.844452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:13.490 [2024-11-15 11:25:50.844465] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:13.490 [2024-11-15 11:25:50.844478] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:13.490 [2024-11-15 11:25:50.844491] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:13.491 [2024-11-15 11:25:50.844504] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:13.491 [2024-11-15 11:25:50.844516] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:13.491 [2024-11-15 11:25:50.844526] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:13.491 [2024-11-15 11:25:50.844535] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:13.491 [2024-11-15 11:25:50.844548] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:13.491 [2024-11-15 11:25:50.844579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.491 [2024-11-15 11:25:50.844590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:13.491 [2024-11-15 11:25:50.844601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:22:13.491 [2024-11-15 11:25:50.844611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.491 [2024-11-15 11:25:50.844683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.491 [2024-11-15 11:25:50.844694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:13.491 [2024-11-15 11:25:50.844704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:13.491 [2024-11-15 11:25:50.844714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.491 [2024-11-15 11:25:50.844811] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:13.491 [2024-11-15 11:25:50.844826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:13.491 [2024-11-15 11:25:50.844837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.491 [2024-11-15 11:25:50.844847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.491 [2024-11-15 11:25:50.844859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:13.491 [2024-11-15 11:25:50.844868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:13.491 [2024-11-15 11:25:50.844878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:13.491 [2024-11-15 11:25:50.844889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:13.491 [2024-11-15 11:25:50.844898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:13.491 [2024-11-15 11:25:50.844907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.491 [2024-11-15 11:25:50.844917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:13.491 [2024-11-15 11:25:50.844928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:13.491 [2024-11-15 11:25:50.844937] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.491 [2024-11-15 11:25:50.844946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:13.491 [2024-11-15 11:25:50.844956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:13.491 [2024-11-15 11:25:50.844974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.491 [2024-11-15 11:25:50.844984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:13.491 [2024-11-15 11:25:50.844993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:13.491 [2024-11-15 11:25:50.845003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.491 [2024-11-15 11:25:50.845012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:13.491 [2024-11-15 11:25:50.845021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:13.491 [2024-11-15 11:25:50.845030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.491 [2024-11-15 11:25:50.845039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:13.491 [2024-11-15 11:25:50.845049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:13.491 [2024-11-15 11:25:50.845058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.491 [2024-11-15 11:25:50.845067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:13.491 [2024-11-15 11:25:50.845077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:13.491 [2024-11-15 11:25:50.845086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.491 [2024-11-15 11:25:50.845095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:13.491 [2024-11-15 11:25:50.845104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:13.491 [2024-11-15 11:25:50.845113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.491 [2024-11-15 11:25:50.845122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:13.491 [2024-11-15 11:25:50.845131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:13.491 [2024-11-15 11:25:50.845140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.491 [2024-11-15 11:25:50.845149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:13.491 [2024-11-15 11:25:50.845158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:13.491 [2024-11-15 11:25:50.845167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.491 [2024-11-15 11:25:50.845176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:13.491 [2024-11-15 11:25:50.845185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:13.491 [2024-11-15 11:25:50.845193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.491 [2024-11-15 11:25:50.845202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:13.491 [2024-11-15 11:25:50.845211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:13.491 [2024-11-15 11:25:50.845221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.491 [2024-11-15 11:25:50.845230] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:13.491 [2024-11-15 
11:25:50.845240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:13.491 [2024-11-15 11:25:50.845250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.491 [2024-11-15 11:25:50.845259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.491 [2024-11-15 11:25:50.845270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:13.491 [2024-11-15 11:25:50.845279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:13.491 [2024-11-15 11:25:50.845288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:13.491 [2024-11-15 11:25:50.845297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:13.491 [2024-11-15 11:25:50.845306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:13.491 [2024-11-15 11:25:50.845315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:13.491 [2024-11-15 11:25:50.845326] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:13.491 [2024-11-15 11:25:50.845338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.491 [2024-11-15 11:25:50.845350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:13.491 [2024-11-15 11:25:50.845360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:13.491 [2024-11-15 11:25:50.845370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:13.491 [2024-11-15 11:25:50.845380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:13.491 [2024-11-15 11:25:50.845390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:13.491 [2024-11-15 11:25:50.845400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:13.491 [2024-11-15 11:25:50.845410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:13.491 [2024-11-15 11:25:50.845420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:13.491 [2024-11-15 11:25:50.845429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:13.491 [2024-11-15 11:25:50.845439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:13.491 [2024-11-15 11:25:50.845449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:13.491 [2024-11-15 11:25:50.845459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:13.491 [2024-11-15 11:25:50.845469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:22:13.491 [2024-11-15 11:25:50.845479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:13.491 [2024-11-15 11:25:50.845489] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:13.491 [2024-11-15 11:25:50.845504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.491 [2024-11-15 11:25:50.845515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:13.491 [2024-11-15 11:25:50.845526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:13.491 [2024-11-15 11:25:50.845536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:13.491 [2024-11-15 11:25:50.845546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:13.491 [2024-11-15 11:25:50.845567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.491 [2024-11-15 11:25:50.845579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:13.491 [2024-11-15 11:25:50.845589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:22:13.491 [2024-11-15 11:25:50.845599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.491 [2024-11-15 11:25:50.885200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.491 [2024-11-15 11:25:50.885241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.491 [2024-11-15 11:25:50.885257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.615 ms 00:22:13.491 [2024-11-15 11:25:50.885267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.491 [2024-11-15 11:25:50.885362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.491 [2024-11-15 11:25:50.885374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:13.491 [2024-11-15 11:25:50.885385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:13.492 [2024-11-15 11:25:50.885395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.750 [2024-11-15 11:25:50.943784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.750 [2024-11-15 11:25:50.943821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:13.750 [2024-11-15 11:25:50.943836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.409 ms 00:22:13.750 [2024-11-15 11:25:50.943847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.750 [2024-11-15 11:25:50.943901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.750 [2024-11-15 11:25:50.943912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.750 [2024-11-15 11:25:50.943927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:13.750 [2024-11-15 11:25:50.943938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.750 [2024-11-15 11:25:50.944429] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.750 [2024-11-15 11:25:50.944449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.750 [2024-11-15 11:25:50.944460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:22:13.750 [2024-11-15 11:25:50.944471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.750 [2024-11-15 11:25:50.944605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.750 [2024-11-15 11:25:50.944618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.750 [2024-11-15 11:25:50.944630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:22:13.750 [2024-11-15 11:25:50.944645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.750 [2024-11-15 11:25:50.964608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.750 [2024-11-15 11:25:50.964646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:13.750 [2024-11-15 11:25:50.964665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.973 ms 00:22:13.750 [2024-11-15 11:25:50.964676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.750 [2024-11-15 11:25:50.983745] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:13.750 [2024-11-15 11:25:50.983780] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:13.750 [2024-11-15 11:25:50.983796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.750 [2024-11-15 11:25:50.983808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:13.750 [2024-11-15 11:25:50.983820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.027 ms 00:22:13.750 [2024-11-15 11:25:50.983830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.750 [2024-11-15 11:25:51.014216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.750 [2024-11-15 11:25:51.014254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:13.750 [2024-11-15 11:25:51.014270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.389 ms 00:22:13.750 [2024-11-15 11:25:51.014282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.750 [2024-11-15 11:25:51.032405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.750 [2024-11-15 11:25:51.032457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:13.750 [2024-11-15 11:25:51.032471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.106 ms 00:22:13.750 [2024-11-15 11:25:51.032481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.750 [2024-11-15 11:25:51.050401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.750 [2024-11-15 11:25:51.050434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:13.750 [2024-11-15 11:25:51.050447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.908 ms 00:22:13.750 [2024-11-15 11:25:51.050457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.750 [2024-11-15 11:25:51.051271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.750 
[2024-11-15 11:25:51.051300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:13.750 [2024-11-15 11:25:51.051312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:22:13.750 [2024-11-15 11:25:51.051326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.750 [2024-11-15 11:25:51.138789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.750 [2024-11-15 11:25:51.138854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:13.750 [2024-11-15 11:25:51.138880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.582 ms 00:22:13.750 [2024-11-15 11:25:51.138891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.009 [2024-11-15 11:25:51.150633] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:14.009 [2024-11-15 11:25:51.153819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.009 [2024-11-15 11:25:51.153852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:14.009 [2024-11-15 11:25:51.153867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.871 ms 00:22:14.009 [2024-11-15 11:25:51.153879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.009 [2024-11-15 11:25:51.153993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.009 [2024-11-15 11:25:51.154007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:14.009 [2024-11-15 11:25:51.154018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:14.009 [2024-11-15 11:25:51.154032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.009 [2024-11-15 11:25:51.154127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.009 [2024-11-15 11:25:51.154140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:14.009 [2024-11-15 11:25:51.154151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:14.009 [2024-11-15 11:25:51.154173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.009 [2024-11-15 11:25:51.154199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.009 [2024-11-15 11:25:51.154210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:14.009 [2024-11-15 11:25:51.154221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:14.009 [2024-11-15 11:25:51.154231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.009 [2024-11-15 11:25:51.154270] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:14.009 [2024-11-15 11:25:51.154283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.009 [2024-11-15 11:25:51.154293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:14.009 [2024-11-15 11:25:51.154303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:14.009 [2024-11-15 11:25:51.154313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.009 [2024-11-15 11:25:51.190440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.009 [2024-11-15 11:25:51.190484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:14.009 
[2024-11-15 11:25:51.190501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.164 ms 00:22:14.009 [2024-11-15 11:25:51.190518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.009 [2024-11-15 11:25:51.190628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.010 [2024-11-15 11:25:51.190642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:14.010 [2024-11-15 11:25:51.190654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:14.010 [2024-11-15 11:25:51.190664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.010 [2024-11-15 11:25:51.191866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 381.716 ms, result 0 00:22:15.382  [2024-11-15T11:26:27.514Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-15 11:26:27.421472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.113 [2024-11-15 11:26:27.421575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:50.113 [2024-11-15 11:26:27.421600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:50.113 [2024-11-15 11:26:27.421617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.113 [2024-11-15 11:26:27.421652] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb:
*NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:50.113 [2024-11-15 11:26:27.426763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.113 [2024-11-15 11:26:27.426807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:50.113 [2024-11-15 11:26:27.426829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.094 ms 00:22:50.113 [2024-11-15 11:26:27.426840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.113 [2024-11-15 11:26:27.427064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.113 [2024-11-15 11:26:27.427077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:50.113 [2024-11-15 11:26:27.427089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:22:50.113 [2024-11-15 11:26:27.427100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.113 [2024-11-15 11:26:27.430696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.113 [2024-11-15 11:26:27.430727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:50.113 [2024-11-15 11:26:27.430740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.585 ms 00:22:50.113 [2024-11-15 11:26:27.430752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.113 [2024-11-15 11:26:27.436326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.113 [2024-11-15 11:26:27.436365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:50.113 [2024-11-15 11:26:27.436378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.554 ms 00:22:50.113 [2024-11-15 11:26:27.436388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.113 [2024-11-15 11:26:27.474037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.113 [2024-11-15 11:26:27.474105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:50.113 [2024-11-15 11:26:27.474122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.619 ms 00:22:50.113 [2024-11-15 11:26:27.474133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.113 [2024-11-15 11:26:27.496437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.113 [2024-11-15 11:26:27.496500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:50.113 [2024-11-15 11:26:27.496518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.291 ms 00:22:50.113 [2024-11-15 11:26:27.496529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.113 [2024-11-15 11:26:27.496701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.113 [2024-11-15 11:26:27.496726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:50.113 [2024-11-15 11:26:27.496738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:22:50.113 [2024-11-15 11:26:27.496747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.371 [2024-11-15 11:26:27.535186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.371 [2024-11-15 11:26:27.535246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:50.371 [2024-11-15 11:26:27.535263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
38.480 ms 00:22:50.371 [2024-11-15 11:26:27.535274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.371 [2024-11-15 11:26:27.572530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.371 [2024-11-15 11:26:27.572617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:50.371 [2024-11-15 11:26:27.572633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.281 ms 00:22:50.371 [2024-11-15 11:26:27.572644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.371 [2024-11-15 11:26:27.609380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.371 [2024-11-15 11:26:27.609441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:50.371 [2024-11-15 11:26:27.609459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.761 ms 00:22:50.371 [2024-11-15 11:26:27.609469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.371 [2024-11-15 11:26:27.647043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.371 [2024-11-15 11:26:27.647124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:50.371 [2024-11-15 11:26:27.647142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.529 ms 00:22:50.371 [2024-11-15 11:26:27.647152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.371 [2024-11-15 11:26:27.647189] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:50.371 [2024-11-15 11:26:27.647206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 
261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:50.371 [2024-11-15 11:26:27.647572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647892] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.647998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 
11:26:27.648150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:50.372 [2024-11-15 11:26:27.648284] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:50.372 [2024-11-15 11:26:27.648299] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e66deb06-c008-4aa6-8b67-bc55c34f40dd 00:22:50.372 [2024-11-15 11:26:27.648310] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:50.372 [2024-11-15 11:26:27.648320] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:50.372 [2024-11-15 11:26:27.648330] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:50.372 [2024-11-15 11:26:27.648340] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:50.372 [2024-11-15 11:26:27.648350] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:50.372 [2024-11-15 11:26:27.648360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:50.372 [2024-11-15 11:26:27.648380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:50.372 [2024-11-15 11:26:27.648389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:50.372 [2024-11-15 11:26:27.648398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:50.372 [2024-11-15 11:26:27.648408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.372 [2024-11-15 11:26:27.648420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:50.372 [2024-11-15 11:26:27.648430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:22:50.372 [2024-11-15 11:26:27.648440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.372 [2024-11-15 11:26:27.668637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.372 [2024-11-15 11:26:27.668697] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:50.372 [2024-11-15 11:26:27.668712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.177 ms 00:22:50.372 [2024-11-15 11:26:27.668722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.372 [2024-11-15 11:26:27.669239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.372 [2024-11-15 11:26:27.669257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:50.372 [2024-11-15 11:26:27.669269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:22:50.372 [2024-11-15 11:26:27.669286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.372 [2024-11-15 11:26:27.721067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.372 [2024-11-15 11:26:27.721131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:50.372 [2024-11-15 11:26:27.721147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.372 [2024-11-15 11:26:27.721159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.372 [2024-11-15 11:26:27.721234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.373 [2024-11-15 11:26:27.721245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:50.373 [2024-11-15 11:26:27.721256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.373 [2024-11-15 11:26:27.721272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.373 [2024-11-15 11:26:27.721358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.373 [2024-11-15 11:26:27.721372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:50.373 [2024-11-15 11:26:27.721382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.373 [2024-11-15 11:26:27.721392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.373 [2024-11-15 11:26:27.721409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.373 [2024-11-15 11:26:27.721421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:50.373 [2024-11-15 11:26:27.721431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.373 [2024-11-15 11:26:27.721440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.631 [2024-11-15 11:26:27.846752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.631 [2024-11-15 11:26:27.846824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:50.631 [2024-11-15 11:26:27.846840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.631 [2024-11-15 11:26:27.846851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.631 [2024-11-15 11:26:27.948513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.631 [2024-11-15 11:26:27.948595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:50.631 [2024-11-15 11:26:27.948611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.631 [2024-11-15 11:26:27.948630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.631 [2024-11-15 11:26:27.948729] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:22:50.631 [2024-11-15 11:26:27.948741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:50.631 [2024-11-15 11:26:27.948752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.631 [2024-11-15 11:26:27.948762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.631 [2024-11-15 11:26:27.948807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.631 [2024-11-15 11:26:27.948819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:50.631 [2024-11-15 11:26:27.948829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.631 [2024-11-15 11:26:27.948839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.631 [2024-11-15 11:26:27.948966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.631 [2024-11-15 11:26:27.948980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:50.631 [2024-11-15 11:26:27.948991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.631 [2024-11-15 11:26:27.949000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.631 [2024-11-15 11:26:27.949037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.631 [2024-11-15 11:26:27.949049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:50.631 [2024-11-15 11:26:27.949060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.631 [2024-11-15 11:26:27.949070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.631 [2024-11-15 11:26:27.949114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.631 [2024-11-15 11:26:27.949125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:50.631 [2024-11-15 11:26:27.949136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.631 [2024-11-15 11:26:27.949146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.631 [2024-11-15 11:26:27.949188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.631 [2024-11-15 11:26:27.949199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:50.631 [2024-11-15 11:26:27.949209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.632 [2024-11-15 11:26:27.949219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.632 [2024-11-15 11:26:27.949340] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 528.701 ms, result 0 00:22:52.029 00:22:52.029 00:22:52.029 11:26:29 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:53.475 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:53.475 11:26:30 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:53.734 [2024-11-15 11:26:30.909591] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:22:53.734 [2024-11-15 11:26:30.909722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77335 ] 00:22:53.734 [2024-11-15 11:26:31.087950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.993 [2024-11-15 11:26:31.205916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.251 [2024-11-15 11:26:31.584096] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:54.251 [2024-11-15 11:26:31.584169] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:54.511 [2024-11-15 11:26:31.745437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.511 [2024-11-15 11:26:31.745496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:54.511 [2024-11-15 11:26:31.745519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:54.512 [2024-11-15 11:26:31.745532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-11-15 11:26:31.745603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-11-15 11:26:31.745617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:54.512 [2024-11-15 11:26:31.745631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:54.512 [2024-11-15 11:26:31.745642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-11-15 11:26:31.745664] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:54.512 [2024-11-15 11:26:31.746801] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:54.512 [2024-11-15 11:26:31.746847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-11-15 11:26:31.746858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:54.512 [2024-11-15 11:26:31.746869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:22:54.512 [2024-11-15 11:26:31.746879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-11-15 11:26:31.748318] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:54.512 [2024-11-15 11:26:31.767166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-11-15 11:26:31.767213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:54.512 [2024-11-15 11:26:31.767229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.878 ms 00:22:54.512 [2024-11-15 11:26:31.767241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-11-15 11:26:31.767326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-11-15 11:26:31.767339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:54.512 [2024-11-15 11:26:31.767350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:54.512 [2024-11-15 11:26:31.767360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-11-15 11:26:31.774411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:54.512 [2024-11-15 11:26:31.774449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:54.512 [2024-11-15 11:26:31.774462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.977 ms 00:22:54.512 [2024-11-15 11:26:31.774479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-11-15 11:26:31.774573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-11-15 11:26:31.774589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:54.512 [2024-11-15 11:26:31.774601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:54.512 [2024-11-15 11:26:31.774612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-11-15 11:26:31.774662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-11-15 11:26:31.774674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:54.512 [2024-11-15 11:26:31.774685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:54.512 [2024-11-15 11:26:31.774694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-11-15 11:26:31.774724] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:54.512 [2024-11-15 11:26:31.779575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-11-15 11:26:31.779605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:54.512 [2024-11-15 11:26:31.779618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.869 ms 00:22:54.512 [2024-11-15 11:26:31.779632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-11-15 11:26:31.779665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-11-15 11:26:31.779675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:54.512 [2024-11-15 11:26:31.779686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:54.512 [2024-11-15 11:26:31.779696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-11-15 11:26:31.779758] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:54.512 [2024-11-15 11:26:31.779782] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:54.512 [2024-11-15 11:26:31.779819] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:54.512 [2024-11-15 11:26:31.779841] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:54.512 [2024-11-15 11:26:31.779933] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:54.512 [2024-11-15 11:26:31.779946] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:54.512 [2024-11-15 11:26:31.779958] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:54.512 [2024-11-15 11:26:31.779971] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:54.512 [2024-11-15 11:26:31.779983] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:54.512 [2024-11-15 11:26:31.779994] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:54.512 [2024-11-15 11:26:31.780003] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:54.512 [2024-11-15 11:26:31.780014] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:54.512 [2024-11-15 11:26:31.780027] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:54.512 [2024-11-15 11:26:31.780037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-11-15 11:26:31.780048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:54.512 [2024-11-15 11:26:31.780058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:22:54.512 [2024-11-15 11:26:31.780068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-11-15 11:26:31.780142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.512 [2024-11-15 11:26:31.780153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:54.512 [2024-11-15 11:26:31.780163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:54.512 [2024-11-15 11:26:31.780173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.512 [2024-11-15 11:26:31.780271] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:54.512 [2024-11-15 11:26:31.780290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:54.512 [2024-11-15 11:26:31.780301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:54.512 [2024-11-15 11:26:31.780312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.512 [2024-11-15 11:26:31.780323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:54.512 [2024-11-15 11:26:31.780333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:54.512 [2024-11-15 11:26:31.780342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:54.512 [2024-11-15 11:26:31.780351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:54.512 [2024-11-15 11:26:31.780361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:54.512 [2024-11-15 11:26:31.780370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:54.512 [2024-11-15 11:26:31.780379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:54.512 [2024-11-15 11:26:31.780388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:54.512 [2024-11-15 11:26:31.780397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:54.512 [2024-11-15 11:26:31.780406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:54.512 [2024-11-15 11:26:31.780415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:54.512 [2024-11-15 11:26:31.780435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.512 [2024-11-15 11:26:31.780444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:54.512 [2024-11-15 11:26:31.780454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:54.512 [2024-11-15 11:26:31.780463] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.512 [2024-11-15 11:26:31.780472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:54.512 [2024-11-15 11:26:31.780482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:54.512 [2024-11-15 11:26:31.780491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.512 [2024-11-15 11:26:31.780499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:54.512 [2024-11-15 11:26:31.780508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:54.512 [2024-11-15 11:26:31.780517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.512 [2024-11-15 11:26:31.780526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:54.512 [2024-11-15 11:26:31.780535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:54.512 [2024-11-15 11:26:31.780544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.512 [2024-11-15 11:26:31.780553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:54.512 [2024-11-15 11:26:31.780574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:54.512 [2024-11-15 11:26:31.780582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.512 [2024-11-15 11:26:31.780592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:54.512 [2024-11-15 11:26:31.780602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:54.512 [2024-11-15 11:26:31.780611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:54.512 [2024-11-15 11:26:31.780620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:54.512 [2024-11-15 11:26:31.780629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:54.513 [2024-11-15 11:26:31.780638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:54.513 [2024-11-15 11:26:31.780650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:54.513 [2024-11-15 11:26:31.780660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:54.513 [2024-11-15 11:26:31.780668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.513 [2024-11-15 11:26:31.780677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:54.513 [2024-11-15 11:26:31.780687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:54.513 [2024-11-15 11:26:31.780695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.513 [2024-11-15 11:26:31.780704] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:54.513 [2024-11-15 11:26:31.780714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:54.513 [2024-11-15 11:26:31.780724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:54.513 [2024-11-15 11:26:31.780734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.513 [2024-11-15 11:26:31.780745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:54.513 [2024-11-15 11:26:31.780754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:54.513 [2024-11-15 11:26:31.780763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:54.513 
[2024-11-15 11:26:31.780772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:54.513 [2024-11-15 11:26:31.780780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:54.513 [2024-11-15 11:26:31.780789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:54.513 [2024-11-15 11:26:31.780800] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:54.513 [2024-11-15 11:26:31.780812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:54.513 [2024-11-15 11:26:31.780823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:54.513 [2024-11-15 11:26:31.780834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:54.513 [2024-11-15 11:26:31.780844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:54.513 [2024-11-15 11:26:31.780855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:54.513 [2024-11-15 11:26:31.780865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:54.513 [2024-11-15 11:26:31.780875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:54.513 [2024-11-15 11:26:31.780885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:54.513 [2024-11-15 11:26:31.780895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:54.513 [2024-11-15 11:26:31.780905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:54.513 [2024-11-15 11:26:31.780915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:54.513 [2024-11-15 11:26:31.780926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:54.513 [2024-11-15 11:26:31.780936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:54.513 [2024-11-15 11:26:31.780947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:54.513 [2024-11-15 11:26:31.780957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:54.513 [2024-11-15 11:26:31.780969] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:54.513 [2024-11-15 11:26:31.780985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:54.513 [2024-11-15 11:26:31.780997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:54.513 [2024-11-15 11:26:31.781008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:54.513 [2024-11-15 11:26:31.781018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:54.513 [2024-11-15 11:26:31.781028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:54.513 [2024-11-15 11:26:31.781039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-11-15 11:26:31.781050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:54.513 [2024-11-15 11:26:31.781060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:22:54.513 [2024-11-15 11:26:31.781070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-11-15 11:26:31.818212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-11-15 11:26:31.818267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:54.513 [2024-11-15 11:26:31.818283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.151 ms 00:22:54.513 [2024-11-15 11:26:31.818295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-11-15 11:26:31.818405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-11-15 11:26:31.818417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:54.513 [2024-11-15 11:26:31.818428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:54.513 [2024-11-15 11:26:31.818438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-11-15 11:26:31.876889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-11-15 11:26:31.876946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:54.513 [2024-11-15 11:26:31.876962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.463 ms 00:22:54.513 [2024-11-15 11:26:31.876974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-11-15 11:26:31.877041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-11-15 11:26:31.877052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:54.513 [2024-11-15 11:26:31.877069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:54.513 [2024-11-15 11:26:31.877079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-11-15 11:26:31.877593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-11-15 11:26:31.877608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:54.513 [2024-11-15 11:26:31.877620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:22:54.513 [2024-11-15 11:26:31.877630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-11-15 11:26:31.877759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-11-15 11:26:31.877773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:54.513 [2024-11-15 11:26:31.877784] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:22:54.513 [2024-11-15 11:26:31.877800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.513 [2024-11-15 11:26:31.896865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.513 [2024-11-15 11:26:31.896918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:54.513 [2024-11-15 11:26:31.896937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.072 ms 00:22:54.513 [2024-11-15 11:26:31.896948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.773 [2024-11-15 11:26:31.916528] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:54.773 [2024-11-15 11:26:31.916586] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:54.773 [2024-11-15 11:26:31.916604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.773 [2024-11-15 11:26:31.916616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:54.773 [2024-11-15 11:26:31.916629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.543 ms 00:22:54.773 [2024-11-15 11:26:31.916639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.773 [2024-11-15 11:26:31.947026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.773 [2024-11-15 11:26:31.947093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:54.773 [2024-11-15 11:26:31.947110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.375 ms 00:22:54.773 [2024-11-15 11:26:31.947121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.773 [2024-11-15 11:26:31.966341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.773 [2024-11-15 11:26:31.966398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:54.773 [2024-11-15 11:26:31.966414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.169 ms 00:22:54.773 [2024-11-15 11:26:31.966424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.773 [2024-11-15 11:26:31.985217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.773 [2024-11-15 11:26:31.985276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:54.773 [2024-11-15 11:26:31.985292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.767 ms 00:22:54.773 [2024-11-15 11:26:31.985303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.773 [2024-11-15 11:26:31.986167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.773 [2024-11-15 11:26:31.986195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:54.773 [2024-11-15 11:26:31.986207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:22:54.773 [2024-11-15 11:26:31.986222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.773 [2024-11-15 11:26:32.072987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.773 [2024-11-15 11:26:32.073053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:54.773 [2024-11-15 11:26:32.073078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.872 ms 00:22:54.773 [2024-11-15 11:26:32.073089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.773 [2024-11-15 11:26:32.085974] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:54.773 [2024-11-15 11:26:32.089222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.773 [2024-11-15 11:26:32.089261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:54.773 [2024-11-15 11:26:32.089276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.082 ms 00:22:54.773 [2024-11-15 11:26:32.089287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.773 [2024-11-15 11:26:32.089414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.773 [2024-11-15 11:26:32.089429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:54.773 [2024-11-15 11:26:32.089440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:54.773 [2024-11-15 11:26:32.089455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.773 [2024-11-15 11:26:32.089549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.773 [2024-11-15 11:26:32.089573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:54.773 [2024-11-15 11:26:32.089585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:54.773 [2024-11-15 11:26:32.089595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.773 [2024-11-15 11:26:32.089622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.773 [2024-11-15 11:26:32.089633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:54.773 [2024-11-15 11:26:32.089644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:54.773 [2024-11-15 11:26:32.089654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.773 [2024-11-15 11:26:32.089694] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:54.773 [2024-11-15 11:26:32.089705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.773 [2024-11-15 11:26:32.089716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:54.773 [2024-11-15 11:26:32.089726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:54.773 [2024-11-15 11:26:32.089747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.773 [2024-11-15 11:26:32.127393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.773 [2024-11-15 11:26:32.127455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:54.773 [2024-11-15 11:26:32.127471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.684 ms 00:22:54.773 [2024-11-15 11:26:32.127490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.773 [2024-11-15 11:26:32.127600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.773 [2024-11-15 11:26:32.127617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:54.773 [2024-11-15 11:26:32.127629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:54.773 [2024-11-15 11:26:32.127639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:54.773 [2024-11-15 11:26:32.128859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.538 ms, result 0 00:22:56.150  [2024-11-15T11:26:34.487Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-15T11:26:35.423Z] Copying: 54/1024 [MB] (26 MBps) [2024-11-15T11:26:36.366Z] Copying: 81/1024 [MB] (27 MBps) [2024-11-15T11:26:37.301Z] Copying: 107/1024 [MB] (26 MBps) [2024-11-15T11:26:38.238Z] Copying: 134/1024 [MB] (27 MBps) [2024-11-15T11:26:39.175Z] Copying: 161/1024 [MB] (27 MBps) [2024-11-15T11:26:40.554Z] Copying: 188/1024 [MB] (26 MBps) [2024-11-15T11:26:41.492Z] Copying: 216/1024 [MB] (27 MBps) [2024-11-15T11:26:42.426Z] Copying: 243/1024 [MB] (27 MBps) [2024-11-15T11:26:43.361Z] Copying: 271/1024 [MB] (28 MBps) [2024-11-15T11:26:44.348Z] Copying: 300/1024 [MB] (28 MBps) [2024-11-15T11:26:45.283Z] Copying: 328/1024 [MB] (28 MBps) [2024-11-15T11:26:46.220Z] Copying: 356/1024 [MB] (27 MBps) [2024-11-15T11:26:47.157Z] Copying: 384/1024 [MB] (28 MBps) [2024-11-15T11:26:48.536Z] Copying: 411/1024 [MB] (27 MBps) [2024-11-15T11:26:49.473Z] Copying: 438/1024 [MB] (26 MBps) [2024-11-15T11:26:50.406Z] Copying: 466/1024 [MB] (27 MBps) [2024-11-15T11:26:51.343Z] Copying: 488/1024 [MB] (22 MBps) [2024-11-15T11:26:52.306Z] Copying: 516/1024 [MB] (28 MBps) [2024-11-15T11:26:53.245Z] Copying: 544/1024 [MB] (27 MBps) [2024-11-15T11:26:54.183Z] Copying: 572/1024 [MB] (28 MBps) [2024-11-15T11:26:55.120Z] Copying: 600/1024 [MB] (28 MBps) [2024-11-15T11:26:56.499Z] Copying: 630/1024 [MB] (29 MBps) [2024-11-15T11:26:57.437Z] Copying: 659/1024 [MB] (29 MBps) [2024-11-15T11:26:58.373Z] Copying: 688/1024 [MB] (28 MBps) [2024-11-15T11:26:59.358Z] Copying: 715/1024 [MB] (27 MBps) [2024-11-15T11:27:00.297Z] Copying: 743/1024 [MB] (27 MBps) [2024-11-15T11:27:01.233Z] Copying: 772/1024 [MB] (29 MBps) [2024-11-15T11:27:02.170Z] Copying: 802/1024 [MB] (29 MBps) [2024-11-15T11:27:03.107Z] Copying: 830/1024 [MB] (28 MBps) [2024-11-15T11:27:04.484Z] Copying: 859/1024 [MB] (28 MBps) [2024-11-15T11:27:05.421Z] Copying: 888/1024 [MB] (29 MBps) [2024-11-15T11:27:06.358Z] Copying: 918/1024 [MB] (30 MBps) [2024-11-15T11:27:07.296Z] Copying: 946/1024 [MB] (27 MBps) [2024-11-15T11:27:08.233Z] Copying: 972/1024 [MB] (26 MBps) [2024-11-15T11:27:09.170Z] Copying: 999/1024 [MB] (26 MBps) [2024-11-15T11:27:10.104Z] Copying: 1023/1024 [MB] (23 MBps) [2024-11-15T11:27:10.104Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-15 11:27:09.845474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.703 [2024-11-15 11:27:09.845567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:32.703 [2024-11-15 11:27:09.845590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:32.703 [2024-11-15 11:27:09.845616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.703 [2024-11-15 11:27:09.847134] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:32.703 [2024-11-15 11:27:09.853496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.703 [2024-11-15 11:27:09.853540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:32.703 [2024-11-15 11:27:09.853566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.317 ms 00:23:32.703 [2024-11-15 11:27:09.853580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.703 [2024-11-15 
11:27:09.865201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.703 [2024-11-15 11:27:09.865266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:32.703 [2024-11-15 11:27:09.865284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.525 ms 00:23:32.703 [2024-11-15 11:27:09.865305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.703 [2024-11-15 11:27:09.889940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.703 [2024-11-15 11:27:09.889988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:32.703 [2024-11-15 11:27:09.890005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.651 ms 00:23:32.703 [2024-11-15 11:27:09.890019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.703 [2024-11-15 11:27:09.895052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.703 [2024-11-15 11:27:09.895092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:32.703 [2024-11-15 11:27:09.895106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.000 ms 00:23:32.703 [2024-11-15 11:27:09.895119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.703 [2024-11-15 11:27:09.934245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.703 [2024-11-15 11:27:09.934293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:32.703 [2024-11-15 11:27:09.934311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.110 ms 00:23:32.703 [2024-11-15 11:27:09.934324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.703 [2024-11-15 11:27:09.956952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.703 [2024-11-15 11:27:09.957008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:32.703 [2024-11-15 11:27:09.957025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.621 ms 00:23:32.703 [2024-11-15 11:27:09.957038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.703 [2024-11-15 11:27:10.075581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.703 [2024-11-15 11:27:10.075725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:32.703 [2024-11-15 11:27:10.075754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.665 ms 00:23:32.703 [2024-11-15 11:27:10.075769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.963 [2024-11-15 11:27:10.114633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.963 [2024-11-15 11:27:10.114701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:32.963 [2024-11-15 11:27:10.114722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.898 ms 00:23:32.963 [2024-11-15 11:27:10.114736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.963 [2024-11-15 11:27:10.150601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.963 [2024-11-15 11:27:10.150677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:32.963 [2024-11-15 11:27:10.150695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.871 ms 00:23:32.963 [2024-11-15 11:27:10.150708] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.963 [2024-11-15 11:27:10.186205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.963 [2024-11-15 11:27:10.186261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:32.963 [2024-11-15 11:27:10.186279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.502 ms 00:23:32.963 [2024-11-15 11:27:10.186291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.963 [2024-11-15 11:27:10.221605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.963 [2024-11-15 11:27:10.221655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:32.963 [2024-11-15 11:27:10.221672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.274 ms 00:23:32.963 [2024-11-15 11:27:10.221684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.963 [2024-11-15 11:27:10.221730] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:32.963 [2024-11-15 11:27:10.221753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120832 / 261120 wr_cnt: 1 state: open 00:23:32.963 [2024-11-15 11:27:10.221770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.221994] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.222007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.222019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.222032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.222046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.222059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.222071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.222083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.222096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.222110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:32.963 [2024-11-15 11:27:10.222123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 
11:27:10.222336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:23:32.964 [2024-11-15 11:27:10.222675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.222988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.223000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.223013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.223027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.223041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.223054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.223068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.223081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.223095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:32.964 [2024-11-15 11:27:10.223116] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:32.964 [2024-11-15 11:27:10.223128] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e66deb06-c008-4aa6-8b67-bc55c34f40dd 00:23:32.964 [2024-11-15 11:27:10.223142] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120832 00:23:32.964 [2024-11-15 11:27:10.223154] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121792 00:23:32.964 [2024-11-15 11:27:10.223167] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120832 00:23:32.964 [2024-11-15 11:27:10.223180] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:23:32.964 [2024-11-15 11:27:10.223192] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:32.964 [2024-11-15 11:27:10.223213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:32.964 [2024-11-15 11:27:10.223242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:32.964 [2024-11-15 11:27:10.223255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:32.964 [2024-11-15 11:27:10.223266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:32.964 [2024-11-15 11:27:10.223278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.964 [2024-11-15 11:27:10.223291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:32.964 [2024-11-15 11:27:10.223304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.553 ms 00:23:32.964 [2024-11-15 11:27:10.223317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.964 [2024-11-15 11:27:10.244232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.964 [2024-11-15 11:27:10.244278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:32.964 [2024-11-15 11:27:10.244294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.907 ms 00:23:32.964 [2024-11-15 11:27:10.244316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.964 [2024-11-15 11:27:10.244945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.964 [2024-11-15 11:27:10.244968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:23:32.965 [2024-11-15 11:27:10.244983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:23:32.965 [2024-11-15 11:27:10.244996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.965 [2024-11-15 11:27:10.300063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.965 [2024-11-15 11:27:10.300118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:32.965 [2024-11-15 11:27:10.300134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.965 [2024-11-15 11:27:10.300148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.965 [2024-11-15 11:27:10.300258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.965 [2024-11-15 11:27:10.300277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:32.965 [2024-11-15 11:27:10.300291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.965 [2024-11-15 11:27:10.300304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.965 [2024-11-15 11:27:10.300429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.965 [2024-11-15 11:27:10.300446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:32.965 [2024-11-15 11:27:10.300466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.965 [2024-11-15 11:27:10.300479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.965 [2024-11-15 11:27:10.300502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.965 [2024-11-15 11:27:10.300516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:32.965 [2024-11-15 11:27:10.300529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.965 [2024-11-15 11:27:10.300542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.223 [2024-11-15 11:27:10.435176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.223 [2024-11-15 11:27:10.435262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:33.223 [2024-11-15 11:27:10.435292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.223 [2024-11-15 11:27:10.435307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.223 [2024-11-15 11:27:10.542025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.223 [2024-11-15 11:27:10.542110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:33.223 [2024-11-15 11:27:10.542133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.223 [2024-11-15 11:27:10.542146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.223 [2024-11-15 11:27:10.542287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.223 [2024-11-15 11:27:10.542304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.223 [2024-11-15 11:27:10.542318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.223 [2024-11-15 11:27:10.542341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.223 [2024-11-15 11:27:10.542403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.223 
[2024-11-15 11:27:10.542418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.223 [2024-11-15 11:27:10.542431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.223 [2024-11-15 11:27:10.542444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.223 [2024-11-15 11:27:10.542612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.223 [2024-11-15 11:27:10.542632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.223 [2024-11-15 11:27:10.542645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.223 [2024-11-15 11:27:10.542657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.224 [2024-11-15 11:27:10.542716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.224 [2024-11-15 11:27:10.542731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:33.224 [2024-11-15 11:27:10.542745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.224 [2024-11-15 11:27:10.542758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.224 [2024-11-15 11:27:10.542812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.224 [2024-11-15 11:27:10.542827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.224 [2024-11-15 11:27:10.542841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.224 [2024-11-15 11:27:10.542853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.224 [2024-11-15 11:27:10.542921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.224 [2024-11-15 11:27:10.542936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.224 [2024-11-15 11:27:10.542966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.224 [2024-11-15 11:27:10.542978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.224 [2024-11-15 11:27:10.543145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 700.550 ms, result 0 00:23:34.608 00:23:34.608 00:23:34.866 11:27:12 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:34.866 [2024-11-15 11:27:12.138138] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
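
Two quick back-of-envelope checks on the figures above, as a hedged sketch rather than part of the captured output: the spdk_dd transfer sizes (assuming the 4096-byte logical block size an FTL bdev typically exposes and dd-style block units for --skip/--count — both assumptions, not stated in the log), and the WAF from the statistics dump, which is total writes divided by user writes:

    # hypothetical sketch; none of these commands appear in the captured run
    echo $(( 262144 * 4096 / 1024 / 1024 ))   # --count=262144 blocks -> 1024 MiB, matching "Copying: 1024/1024 [MB]"
    echo $(( 131072 * 4096 / 1024 / 1024 ))   # --skip=131072 blocks  -> a 512 MiB offset into ftl0
    echo "scale=4; 121792 / 120832" | bc      # total writes / user writes -> 1.0079, matching "WAF: 1.0079"

Under those assumptions the 1024 MiB figure lines up with the copy progress reported earlier, and 121792 / 120832 reproduces the WAF of 1.0079 from the ftl_dev_dump_stats output.
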
00:23:34.866 [2024-11-15 11:27:12.138577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77748 ] 00:23:35.123 [2024-11-15 11:27:12.324674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.123 [2024-11-15 11:27:12.441990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.691 [2024-11-15 11:27:12.814132] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:35.691 [2024-11-15 11:27:12.814219] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:35.691 [2024-11-15 11:27:12.976249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.691 [2024-11-15 11:27:12.976315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:35.691 [2024-11-15 11:27:12.976338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:35.691 [2024-11-15 11:27:12.976349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.691 [2024-11-15 11:27:12.976409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.691 [2024-11-15 11:27:12.976421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:35.691 [2024-11-15 11:27:12.976435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:35.691 [2024-11-15 11:27:12.976445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.691 [2024-11-15 11:27:12.976466] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:35.691 [2024-11-15 11:27:12.977477] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:35.691 [2024-11-15 11:27:12.977512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.691 [2024-11-15 11:27:12.977523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:35.691 [2024-11-15 11:27:12.977535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:23:35.691 [2024-11-15 11:27:12.977545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.691 [2024-11-15 11:27:12.979071] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:35.691 [2024-11-15 11:27:12.998866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.691 [2024-11-15 11:27:12.998923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:35.691 [2024-11-15 11:27:12.998939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.826 ms 00:23:35.691 [2024-11-15 11:27:12.998951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.691 [2024-11-15 11:27:12.999045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.691 [2024-11-15 11:27:12.999059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:35.691 [2024-11-15 11:27:12.999071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:35.691 [2024-11-15 11:27:12.999081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.691 [2024-11-15 11:27:13.006453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:35.691 [2024-11-15 11:27:13.006497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:35.691 [2024-11-15 11:27:13.006512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.293 ms 00:23:35.691 [2024-11-15 11:27:13.006528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.691 [2024-11-15 11:27:13.006627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.691 [2024-11-15 11:27:13.006644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:35.691 [2024-11-15 11:27:13.006656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:35.691 [2024-11-15 11:27:13.006666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.691 [2024-11-15 11:27:13.006720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.691 [2024-11-15 11:27:13.006731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:35.691 [2024-11-15 11:27:13.006742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:35.691 [2024-11-15 11:27:13.006752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.691 [2024-11-15 11:27:13.006786] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:35.691 [2024-11-15 11:27:13.011795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.691 [2024-11-15 11:27:13.011830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:35.691 [2024-11-15 11:27:13.011843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.029 ms 00:23:35.691 [2024-11-15 11:27:13.011857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.691 [2024-11-15 11:27:13.011892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.691 [2024-11-15 11:27:13.011904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:35.691 [2024-11-15 11:27:13.011914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:35.691 [2024-11-15 11:27:13.011924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.691 [2024-11-15 11:27:13.011985] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:35.691 [2024-11-15 11:27:13.012011] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:35.691 [2024-11-15 11:27:13.012046] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:35.691 [2024-11-15 11:27:13.012067] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:35.691 [2024-11-15 11:27:13.012162] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:35.691 [2024-11-15 11:27:13.012183] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:35.691 [2024-11-15 11:27:13.012202] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:35.691 [2024-11-15 11:27:13.012216] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:35.691 [2024-11-15 11:27:13.012228] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:35.691 [2024-11-15 11:27:13.012239] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:35.691 [2024-11-15 11:27:13.012250] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:35.691 [2024-11-15 11:27:13.012260] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:35.691 [2024-11-15 11:27:13.012274] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:35.691 [2024-11-15 11:27:13.012285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.691 [2024-11-15 11:27:13.012295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:35.691 [2024-11-15 11:27:13.012306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:23:35.691 [2024-11-15 11:27:13.012316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.691 [2024-11-15 11:27:13.012397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.691 [2024-11-15 11:27:13.012415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:35.691 [2024-11-15 11:27:13.012426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:35.691 [2024-11-15 11:27:13.012436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.691 [2024-11-15 11:27:13.012536] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:35.692 [2024-11-15 11:27:13.012552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:35.692 [2024-11-15 11:27:13.012586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:35.692 [2024-11-15 11:27:13.012604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.692 [2024-11-15 11:27:13.012622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:35.692 [2024-11-15 11:27:13.012631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:35.692 [2024-11-15 11:27:13.012641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:35.692 [2024-11-15 11:27:13.012651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:35.692 [2024-11-15 11:27:13.012660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:35.692 [2024-11-15 11:27:13.012670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:35.692 [2024-11-15 11:27:13.012682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:35.692 [2024-11-15 11:27:13.012692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:35.692 [2024-11-15 11:27:13.012702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:35.692 [2024-11-15 11:27:13.012712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:35.692 [2024-11-15 11:27:13.012721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:35.692 [2024-11-15 11:27:13.012741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.692 [2024-11-15 11:27:13.012751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:35.692 [2024-11-15 11:27:13.012760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:35.692 [2024-11-15 11:27:13.012770] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.692 [2024-11-15 11:27:13.012780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:35.692 [2024-11-15 11:27:13.012790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:35.692 [2024-11-15 11:27:13.012799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.692 [2024-11-15 11:27:13.012808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:35.692 [2024-11-15 11:27:13.012817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:35.692 [2024-11-15 11:27:13.012826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.692 [2024-11-15 11:27:13.012839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:35.692 [2024-11-15 11:27:13.012854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:35.692 [2024-11-15 11:27:13.012870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.692 [2024-11-15 11:27:13.012882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:35.692 [2024-11-15 11:27:13.012891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:35.692 [2024-11-15 11:27:13.012900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.692 [2024-11-15 11:27:13.012909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:35.692 [2024-11-15 11:27:13.012918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:35.692 [2024-11-15 11:27:13.012927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:35.692 [2024-11-15 11:27:13.012936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:35.692 [2024-11-15 11:27:13.012945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:35.692 [2024-11-15 11:27:13.012954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:35.692 [2024-11-15 11:27:13.012963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:35.692 [2024-11-15 11:27:13.012972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:35.692 [2024-11-15 11:27:13.012981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.692 [2024-11-15 11:27:13.012991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:35.692 [2024-11-15 11:27:13.013007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:35.692 [2024-11-15 11:27:13.013019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.692 [2024-11-15 11:27:13.013028] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:35.692 [2024-11-15 11:27:13.013038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:35.692 [2024-11-15 11:27:13.013048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:35.692 [2024-11-15 11:27:13.013058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.692 [2024-11-15 11:27:13.013068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:35.692 [2024-11-15 11:27:13.013078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:35.692 [2024-11-15 11:27:13.013088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:35.692 
[2024-11-15 11:27:13.013097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:35.692 [2024-11-15 11:27:13.013106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:35.692 [2024-11-15 11:27:13.013115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:35.692 [2024-11-15 11:27:13.013128] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:35.692 [2024-11-15 11:27:13.013147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:35.692 [2024-11-15 11:27:13.013168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:35.692 [2024-11-15 11:27:13.013180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:35.692 [2024-11-15 11:27:13.013190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:35.692 [2024-11-15 11:27:13.013200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:35.692 [2024-11-15 11:27:13.013211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:35.692 [2024-11-15 11:27:13.013221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:35.692 [2024-11-15 11:27:13.013231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:35.692 [2024-11-15 11:27:13.013242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:35.692 [2024-11-15 11:27:13.013252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:35.692 [2024-11-15 11:27:13.013263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:35.692 [2024-11-15 11:27:13.013273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:35.692 [2024-11-15 11:27:13.013283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:35.692 [2024-11-15 11:27:13.013293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:35.692 [2024-11-15 11:27:13.013305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:35.692 [2024-11-15 11:27:13.013320] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:35.692 [2024-11-15 11:27:13.013343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:35.692 [2024-11-15 11:27:13.013363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:35.692 [2024-11-15 11:27:13.013379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:35.692 [2024-11-15 11:27:13.013390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:35.692 [2024-11-15 11:27:13.013401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:35.692 [2024-11-15 11:27:13.013413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.692 [2024-11-15 11:27:13.013423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:35.692 [2024-11-15 11:27:13.013434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.932 ms 00:23:35.692 [2024-11-15 11:27:13.013444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.692 [2024-11-15 11:27:13.053201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.692 [2024-11-15 11:27:13.053261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:35.692 [2024-11-15 11:27:13.053278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.765 ms 00:23:35.692 [2024-11-15 11:27:13.053290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.692 [2024-11-15 11:27:13.053397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.692 [2024-11-15 11:27:13.053409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:35.692 [2024-11-15 11:27:13.053421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:35.692 [2024-11-15 11:27:13.053431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.951 [2024-11-15 11:27:13.112815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.951 [2024-11-15 11:27:13.112873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:35.951 [2024-11-15 11:27:13.112890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.394 ms 00:23:35.951 [2024-11-15 11:27:13.112901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.951 [2024-11-15 11:27:13.112968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.951 [2024-11-15 11:27:13.112981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:35.951 [2024-11-15 11:27:13.112997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:35.951 [2024-11-15 11:27:13.113007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.951 [2024-11-15 11:27:13.113554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.951 [2024-11-15 11:27:13.113596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:35.951 [2024-11-15 11:27:13.113608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:23:35.951 [2024-11-15 11:27:13.113618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.951 [2024-11-15 11:27:13.113767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.951 [2024-11-15 11:27:13.113788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:35.951 [2024-11-15 11:27:13.113807] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:23:35.951 [2024-11-15 11:27:13.113830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.952 [2024-11-15 11:27:13.132917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.952 [2024-11-15 11:27:13.132975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:35.952 [2024-11-15 11:27:13.132996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.090 ms 00:23:35.952 [2024-11-15 11:27:13.133007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.952 [2024-11-15 11:27:13.152605] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:35.952 [2024-11-15 11:27:13.152664] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:35.952 [2024-11-15 11:27:13.152682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.952 [2024-11-15 11:27:13.152693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:35.952 [2024-11-15 11:27:13.152707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.559 ms 00:23:35.952 [2024-11-15 11:27:13.152717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.952 [2024-11-15 11:27:13.183475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.952 [2024-11-15 11:27:13.183548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:35.952 [2024-11-15 11:27:13.183573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.743 ms 00:23:35.952 [2024-11-15 11:27:13.183584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.952 [2024-11-15 11:27:13.202914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.952 [2024-11-15 11:27:13.202988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:35.952 [2024-11-15 11:27:13.203005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.276 ms 00:23:35.952 [2024-11-15 11:27:13.203015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.952 [2024-11-15 11:27:13.221963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.952 [2024-11-15 11:27:13.222025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:35.952 [2024-11-15 11:27:13.222041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.921 ms 00:23:35.952 [2024-11-15 11:27:13.222052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.952 [2024-11-15 11:27:13.222939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.952 [2024-11-15 11:27:13.222976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:35.952 [2024-11-15 11:27:13.222990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:23:35.952 [2024-11-15 11:27:13.223006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.952 [2024-11-15 11:27:13.310781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.952 [2024-11-15 11:27:13.310853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:35.952 [2024-11-15 11:27:13.310879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.887 ms 00:23:35.952 [2024-11-15 11:27:13.310890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.952 [2024-11-15 11:27:13.324292] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:35.952 [2024-11-15 11:27:13.327650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.952 [2024-11-15 11:27:13.327694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:35.952 [2024-11-15 11:27:13.327711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.702 ms 00:23:35.952 [2024-11-15 11:27:13.327722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.952 [2024-11-15 11:27:13.327843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.952 [2024-11-15 11:27:13.327857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:35.952 [2024-11-15 11:27:13.327869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:35.952 [2024-11-15 11:27:13.327883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.952 [2024-11-15 11:27:13.329544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.952 [2024-11-15 11:27:13.329596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:35.952 [2024-11-15 11:27:13.329609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.602 ms 00:23:35.952 [2024-11-15 11:27:13.329620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.952 [2024-11-15 11:27:13.329662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.952 [2024-11-15 11:27:13.329674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:35.952 [2024-11-15 11:27:13.329685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:35.952 [2024-11-15 11:27:13.329696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.952 [2024-11-15 11:27:13.329738] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:35.952 [2024-11-15 11:27:13.329752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.952 [2024-11-15 11:27:13.329762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:35.952 [2024-11-15 11:27:13.329772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:35.952 [2024-11-15 11:27:13.329782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.211 [2024-11-15 11:27:13.366978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.211 [2024-11-15 11:27:13.367033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:36.211 [2024-11-15 11:27:13.367051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.232 ms 00:23:36.211 [2024-11-15 11:27:13.367069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.211 [2024-11-15 11:27:13.367166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.211 [2024-11-15 11:27:13.367179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:36.211 [2024-11-15 11:27:13.367191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:36.211 [2024-11-15 11:27:13.367201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
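Each startup step above is reported by trace_step as an Action with its own name, duration and status, and the finish_msg record that follows totals the whole 'FTL startup' sequence (393.623 ms in this run). As a quick sanity check, the per-step durations can be summed back out of a saved copy of this output; a minimal sketch, assuming the log was captured to ftl_restore.log (file name assumed, not part of the test suite):

grep -o 'duration: [0-9.]* ms' ftl_restore.log | awk '{sum += $2} END {printf "sum of trace_step durations: %.3f ms\n", sum}'

Note this sums every trace_step record in the capture (startup, shutdown and rollback alike), so expect it to exceed any single finish_msg total.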
00:23:36.211 [2024-11-15 11:27:13.370503] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 393.623 ms, result 0 00:23:37.584  [2024-11-15T11:27:15.921Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-15T11:27:16.859Z] Copying: 55/1024 [MB] (28 MBps) [2024-11-15T11:27:17.837Z] Copying: 83/1024 [MB] (28 MBps) [2024-11-15T11:27:18.776Z] Copying: 111/1024 [MB] (28 MBps) [2024-11-15T11:27:19.715Z] Copying: 140/1024 [MB] (28 MBps) [2024-11-15T11:27:20.652Z] Copying: 169/1024 [MB] (28 MBps) [2024-11-15T11:27:22.030Z] Copying: 198/1024 [MB] (29 MBps) [2024-11-15T11:27:22.599Z] Copying: 227/1024 [MB] (28 MBps) [2024-11-15T11:27:23.976Z] Copying: 256/1024 [MB] (29 MBps) [2024-11-15T11:27:24.912Z] Copying: 285/1024 [MB] (29 MBps) [2024-11-15T11:27:25.849Z] Copying: 315/1024 [MB] (30 MBps) [2024-11-15T11:27:26.787Z] Copying: 345/1024 [MB] (29 MBps) [2024-11-15T11:27:27.723Z] Copying: 374/1024 [MB] (29 MBps) [2024-11-15T11:27:28.659Z] Copying: 404/1024 [MB] (30 MBps) [2024-11-15T11:27:29.605Z] Copying: 433/1024 [MB] (28 MBps) [2024-11-15T11:27:30.981Z] Copying: 462/1024 [MB] (29 MBps) [2024-11-15T11:27:31.917Z] Copying: 490/1024 [MB] (28 MBps) [2024-11-15T11:27:32.850Z] Copying: 519/1024 [MB] (28 MBps) [2024-11-15T11:27:33.786Z] Copying: 547/1024 [MB] (27 MBps) [2024-11-15T11:27:34.721Z] Copying: 575/1024 [MB] (28 MBps) [2024-11-15T11:27:35.663Z] Copying: 604/1024 [MB] (28 MBps) [2024-11-15T11:27:36.597Z] Copying: 632/1024 [MB] (28 MBps) [2024-11-15T11:27:37.970Z] Copying: 661/1024 [MB] (28 MBps) [2024-11-15T11:27:38.902Z] Copying: 689/1024 [MB] (27 MBps) [2024-11-15T11:27:39.836Z] Copying: 715/1024 [MB] (26 MBps) [2024-11-15T11:27:40.768Z] Copying: 741/1024 [MB] (26 MBps) [2024-11-15T11:27:41.701Z] Copying: 769/1024 [MB] (27 MBps) [2024-11-15T11:27:42.636Z] Copying: 795/1024 [MB] (26 MBps) [2024-11-15T11:27:43.571Z] Copying: 822/1024 [MB] (27 MBps) [2024-11-15T11:27:44.970Z] Copying: 851/1024 [MB] (28 MBps) [2024-11-15T11:27:45.915Z] Copying: 879/1024 [MB] (27 MBps) [2024-11-15T11:27:46.850Z] Copying: 910/1024 [MB] (31 MBps) [2024-11-15T11:27:47.785Z] Copying: 939/1024 [MB] (29 MBps) [2024-11-15T11:27:48.719Z] Copying: 968/1024 [MB] (29 MBps) [2024-11-15T11:27:49.759Z] Copying: 999/1024 [MB] (30 MBps) [2024-11-15T11:27:49.759Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-15 11:27:49.658538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.359 [2024-11-15 11:27:49.658630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:12.359 [2024-11-15 11:27:49.658653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:12.359 [2024-11-15 11:27:49.658675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.359 [2024-11-15 11:27:49.658707] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:12.359 [2024-11-15 11:27:49.663660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.359 [2024-11-15 11:27:49.663702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:12.359 [2024-11-15 11:27:49.663715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.936 ms 00:24:12.359 [2024-11-15 11:27:49.663726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.359 [2024-11-15 11:27:49.663933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.359 [2024-11-15 11:27:49.663946] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:12.359 [2024-11-15 11:27:49.663958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:24:12.359 [2024-11-15 11:27:49.663968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.359 [2024-11-15 11:27:49.667636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.359 [2024-11-15 11:27:49.667683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:12.359 [2024-11-15 11:27:49.667697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.652 ms 00:24:12.359 [2024-11-15 11:27:49.667709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.359 [2024-11-15 11:27:49.672794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.359 [2024-11-15 11:27:49.672827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:12.359 [2024-11-15 11:27:49.672838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.055 ms 00:24:12.359 [2024-11-15 11:27:49.672849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.359 [2024-11-15 11:27:49.710148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.359 [2024-11-15 11:27:49.710207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:12.359 [2024-11-15 11:27:49.710222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.277 ms 00:24:12.359 [2024-11-15 11:27:49.710232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.359 [2024-11-15 11:27:49.731708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.359 [2024-11-15 11:27:49.731761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:12.359 [2024-11-15 11:27:49.731776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.465 ms 00:24:12.359 [2024-11-15 11:27:49.731787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.619 [2024-11-15 11:27:49.861673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.619 [2024-11-15 11:27:49.861763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:12.619 [2024-11-15 11:27:49.861781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 130.049 ms 00:24:12.619 [2024-11-15 11:27:49.861792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.619 [2024-11-15 11:27:49.899478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.619 [2024-11-15 11:27:49.899539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:12.619 [2024-11-15 11:27:49.899562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.725 ms 00:24:12.619 [2024-11-15 11:27:49.899574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.619 [2024-11-15 11:27:49.935880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.619 [2024-11-15 11:27:49.935937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:12.619 [2024-11-15 11:27:49.935966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.316 ms 00:24:12.619 [2024-11-15 11:27:49.935977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.619 [2024-11-15 11:27:49.971613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:12.619 [2024-11-15 11:27:49.971669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:12.619 [2024-11-15 11:27:49.971684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.649 ms 00:24:12.619 [2024-11-15 11:27:49.971701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.619 [2024-11-15 11:27:50.008972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.619 [2024-11-15 11:27:50.009058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:12.619 [2024-11-15 11:27:50.009075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.240 ms 00:24:12.619 [2024-11-15 11:27:50.009086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.619 [2024-11-15 11:27:50.010942] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:12.619 [2024-11-15 11:27:50.010987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:12.619 [2024-11-15 11:27:50.011002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011187] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 
[2024-11-15 11:27:50.011451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:12.619 [2024-11-15 11:27:50.011535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 
state: free 00:24:12.620 [2024-11-15 11:27:50.011723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 
0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.011995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.012005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.012015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.012025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.012036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.012047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:12.620 [2024-11-15 11:27:50.012065] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:12.620 [2024-11-15 11:27:50.012075] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e66deb06-c008-4aa6-8b67-bc55c34f40dd 00:24:12.620 [2024-11-15 11:27:50.012087] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:12.620 [2024-11-15 11:27:50.012097] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 11200 00:24:12.620 [2024-11-15 11:27:50.012107] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 10240 00:24:12.620 [2024-11-15 11:27:50.012118] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0938 00:24:12.620 [2024-11-15 11:27:50.012128] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:12.620 [2024-11-15 11:27:50.012148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:12.620 [2024-11-15 11:27:50.012158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:12.620 [2024-11-15 11:27:50.012179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:12.620 [2024-11-15 11:27:50.012189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:12.620 [2024-11-15 11:27:50.012198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.620 [2024-11-15 11:27:50.012209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:12.620 [2024-11-15 11:27:50.012220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:24:12.620 [2024-11-15 11:27:50.012230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.879 [2024-11-15 11:27:50.032864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.879 [2024-11-15 11:27:50.032917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:12.879 [2024-11-15 11:27:50.032933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.611 ms 00:24:12.879 [2024-11-15 11:27:50.032953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.879 [2024-11-15 11:27:50.033473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.879 [2024-11-15 11:27:50.033494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:12.879 [2024-11-15 11:27:50.033505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.483 ms 00:24:12.879 [2024-11-15 11:27:50.033516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.879 [2024-11-15 11:27:50.084652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.879 [2024-11-15 11:27:50.084723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:12.879 [2024-11-15 11:27:50.084740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.879 [2024-11-15 11:27:50.084751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.879 [2024-11-15 11:27:50.084829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.879 [2024-11-15 11:27:50.084840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:12.879 [2024-11-15 11:27:50.084850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.879 [2024-11-15 11:27:50.084861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.879 [2024-11-15 11:27:50.084966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.879 [2024-11-15 11:27:50.084980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:12.879 [2024-11-15 11:27:50.084996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.879 [2024-11-15 11:27:50.085006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.879 [2024-11-15 11:27:50.085024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.879 [2024-11-15 11:27:50.085035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:12.879 [2024-11-15 11:27:50.085045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.879 [2024-11-15 11:27:50.085055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.879 [2024-11-15 11:27:50.207774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.879 [2024-11-15 11:27:50.207848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:12.879 [2024-11-15 11:27:50.207870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.879 [2024-11-15 11:27:50.207881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.139 [2024-11-15 11:27:50.309386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.139 [2024-11-15 11:27:50.309442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:13.139 [2024-11-15 11:27:50.309457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.139 [2024-11-15 11:27:50.309469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.139 [2024-11-15 11:27:50.309572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.139 [2024-11-15 11:27:50.309585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:13.139 [2024-11-15 11:27:50.309597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.139 [2024-11-15 11:27:50.309613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.139 [2024-11-15 11:27:50.309663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.139 [2024-11-15 11:27:50.309674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:13.139 [2024-11-15 
11:27:50.309685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.139 [2024-11-15 11:27:50.309694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.139 [2024-11-15 11:27:50.309820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.139 [2024-11-15 11:27:50.309833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:13.139 [2024-11-15 11:27:50.309844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.139 [2024-11-15 11:27:50.309853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.139 [2024-11-15 11:27:50.309893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.139 [2024-11-15 11:27:50.309914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:13.139 [2024-11-15 11:27:50.309925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.139 [2024-11-15 11:27:50.309934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.139 [2024-11-15 11:27:50.309972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.139 [2024-11-15 11:27:50.309982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:13.139 [2024-11-15 11:27:50.309993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.139 [2024-11-15 11:27:50.310004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.139 [2024-11-15 11:27:50.310047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.139 [2024-11-15 11:27:50.310059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:13.139 [2024-11-15 11:27:50.310069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.139 [2024-11-15 11:27:50.310079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.139 [2024-11-15 11:27:50.310208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 652.695 ms, result 0 00:24:14.076 00:24:14.076 00:24:14.076 11:27:51 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:15.974 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:15.974 11:27:53 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:15.974 11:27:53 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:15.974 11:27:53 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:15.974 11:27:53 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:15.974 11:27:53 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:15.974 11:27:53 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76289 00:24:15.974 11:27:53 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 76289 ']' 00:24:15.974 11:27:53 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 76289 00:24:15.974 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (76289) - No such process 00:24:15.974 Process with pid 76289 is not found 00:24:15.974 11:27:53 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 76289 is not found' 00:24:15.974 11:27:53 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 
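For reference, the WAF value in the statistics dump above follows directly from the two write counters printed alongside it: WAF = total writes / user writes = 11200 / 10240 = 1.09375, which ftl_debug.c rounds to 1.0938; the difference between the two counters is the FTL's own internal write traffic on top of the user data.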
00:24:15.974 Remove shared memory files 00:24:15.974 11:27:53 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:15.974 11:27:53 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:15.974 11:27:53 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:15.974 11:27:53 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:15.974 11:27:53 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:15.974 11:27:53 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:15.974 00:24:15.974 real 3m1.870s 00:24:15.974 user 2m49.361s 00:24:15.974 sys 0m14.145s 00:24:15.974 11:27:53 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:15.974 11:27:53 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:15.974 ************************************ 00:24:15.974 END TEST ftl_restore 00:24:15.974 ************************************ 00:24:15.974 11:27:53 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:15.974 11:27:53 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:24:15.974 11:27:53 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:15.974 11:27:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:16.233 ************************************ 00:24:16.233 START TEST ftl_dirty_shutdown 00:24:16.233 ************************************ 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:16.233 * Looking for test storage... 00:24:16.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.233 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:16.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.233 --rc genhtml_branch_coverage=1 00:24:16.234 --rc genhtml_function_coverage=1 00:24:16.234 --rc genhtml_legend=1 00:24:16.234 --rc geninfo_all_blocks=1 00:24:16.234 --rc geninfo_unexecuted_blocks=1 00:24:16.234 00:24:16.234 ' 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:16.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.234 --rc genhtml_branch_coverage=1 00:24:16.234 --rc genhtml_function_coverage=1 00:24:16.234 --rc genhtml_legend=1 00:24:16.234 --rc geninfo_all_blocks=1 00:24:16.234 --rc geninfo_unexecuted_blocks=1 00:24:16.234 00:24:16.234 ' 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:16.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.234 --rc genhtml_branch_coverage=1 00:24:16.234 --rc genhtml_function_coverage=1 00:24:16.234 --rc genhtml_legend=1 00:24:16.234 --rc geninfo_all_blocks=1 00:24:16.234 --rc geninfo_unexecuted_blocks=1 00:24:16.234 00:24:16.234 ' 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:16.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.234 --rc genhtml_branch_coverage=1 00:24:16.234 --rc genhtml_function_coverage=1 00:24:16.234 --rc genhtml_legend=1 00:24:16.234 --rc geninfo_all_blocks=1 00:24:16.234 --rc geninfo_unexecuted_blocks=1 00:24:16.234 00:24:16.234 ' 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:16.234 11:27:53 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78230 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78230 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 78230 ']' 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:16.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:16.234 11:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:16.491 [2024-11-15 11:27:53.742208] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
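The trace above covers the whole bring-up handshake: dirty_shutdown.sh parses its options with getopts :u:c: (in this run the NV cache BDF 0000:00:10.0 arrives as an option and 0000:00:11.0 is left as the positional base device after shift 2), installs the restore_kill trap, launches spdk_tgt on core mask 0x1, and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-poll pattern, with paths and the retry count taken from the trace; the probe RPC (rpc_get_methods) and the sleep interval are assumptions, since the polling body of waitforlisten is not echoed in this log:

    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_addr=/var/tmp/spdk.sock

    "$spdk_tgt_bin" -m 0x1 &      # dirty_shutdown.sh@44: single-core target
    svcpid=$!                     # 78230 in this run

    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 1; i <= 100; i++)); do   # max_retries=100, as in the trace
        # any RPC that succeeds proves the target is up and serving the socket
        "$rpc_py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && break
        sleep 0.5                      # assumed interval, not visible in the trace
    done

The trap installed at dirty_shutdown.sh@42 ('restore_kill; exit 1' on SIGINT/SIGTERM/EXIT) guarantees the target is torn down even if this wait never completes.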
00:24:16.491 [2024-11-15 11:27:53.742385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78230 ] 00:24:16.749 [2024-11-15 11:27:53.925763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.749 [2024-11-15 11:27:54.041593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.682 11:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:17.682 11:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:24:17.682 11:27:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:17.682 11:27:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:17.682 11:27:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:17.682 11:27:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:17.682 11:27:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:17.682 11:27:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:17.940 11:27:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:17.940 11:27:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:17.940 11:27:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:17.940 11:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:24:17.940 11:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:17.940 11:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:17.940 11:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:17.940 11:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:18.198 11:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:18.198 { 00:24:18.198 "name": "nvme0n1", 00:24:18.198 "aliases": [ 00:24:18.198 "593756b4-b2e8-4a1b-907a-31f16e1335bb" 00:24:18.198 ], 00:24:18.198 "product_name": "NVMe disk", 00:24:18.198 "block_size": 4096, 00:24:18.198 "num_blocks": 1310720, 00:24:18.198 "uuid": "593756b4-b2e8-4a1b-907a-31f16e1335bb", 00:24:18.198 "numa_id": -1, 00:24:18.198 "assigned_rate_limits": { 00:24:18.198 "rw_ios_per_sec": 0, 00:24:18.198 "rw_mbytes_per_sec": 0, 00:24:18.198 "r_mbytes_per_sec": 0, 00:24:18.198 "w_mbytes_per_sec": 0 00:24:18.198 }, 00:24:18.199 "claimed": true, 00:24:18.199 "claim_type": "read_many_write_one", 00:24:18.199 "zoned": false, 00:24:18.199 "supported_io_types": { 00:24:18.199 "read": true, 00:24:18.199 "write": true, 00:24:18.199 "unmap": true, 00:24:18.199 "flush": true, 00:24:18.199 "reset": true, 00:24:18.199 "nvme_admin": true, 00:24:18.199 "nvme_io": true, 00:24:18.199 "nvme_io_md": false, 00:24:18.199 "write_zeroes": true, 00:24:18.199 "zcopy": false, 00:24:18.199 "get_zone_info": false, 00:24:18.199 "zone_management": false, 00:24:18.199 "zone_append": false, 00:24:18.199 "compare": true, 00:24:18.199 "compare_and_write": false, 00:24:18.199 "abort": true, 00:24:18.199 "seek_hole": false, 00:24:18.199 "seek_data": false, 00:24:18.199 
"copy": true, 00:24:18.199 "nvme_iov_md": false 00:24:18.199 }, 00:24:18.199 "driver_specific": { 00:24:18.199 "nvme": [ 00:24:18.199 { 00:24:18.199 "pci_address": "0000:00:11.0", 00:24:18.199 "trid": { 00:24:18.199 "trtype": "PCIe", 00:24:18.199 "traddr": "0000:00:11.0" 00:24:18.199 }, 00:24:18.199 "ctrlr_data": { 00:24:18.199 "cntlid": 0, 00:24:18.199 "vendor_id": "0x1b36", 00:24:18.199 "model_number": "QEMU NVMe Ctrl", 00:24:18.199 "serial_number": "12341", 00:24:18.199 "firmware_revision": "8.0.0", 00:24:18.199 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:18.199 "oacs": { 00:24:18.199 "security": 0, 00:24:18.199 "format": 1, 00:24:18.199 "firmware": 0, 00:24:18.199 "ns_manage": 1 00:24:18.199 }, 00:24:18.199 "multi_ctrlr": false, 00:24:18.199 "ana_reporting": false 00:24:18.199 }, 00:24:18.199 "vs": { 00:24:18.199 "nvme_version": "1.4" 00:24:18.199 }, 00:24:18.199 "ns_data": { 00:24:18.199 "id": 1, 00:24:18.199 "can_share": false 00:24:18.199 } 00:24:18.199 } 00:24:18.199 ], 00:24:18.199 "mp_policy": "active_passive" 00:24:18.199 } 00:24:18.199 } 00:24:18.199 ]' 00:24:18.199 11:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:18.199 11:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:18.199 11:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:18.457 11:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:24:18.457 11:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:24:18.457 11:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:24:18.457 11:27:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:18.457 11:27:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:18.457 11:27:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:18.457 11:27:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:18.457 11:27:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:18.457 11:27:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=9b5fecf5-e325-4985-b657-685b121fce69 00:24:18.457 11:27:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:18.457 11:27:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b5fecf5-e325-4985-b657-685b121fce69 00:24:18.714 11:27:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:18.972 11:27:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=2c09184a-214f-4e53-b3d7-49ade7af5bc4 00:24:18.972 11:27:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2c09184a-214f-4e53-b3d7-49ade7af5bc4 00:24:19.230 11:27:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 00:24:19.230 11:27:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:19.230 11:27:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 00:24:19.230 11:27:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:19.230 11:27:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:19.230 11:27:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 00:24:19.230 11:27:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:19.230 11:27:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 00:24:19.230 11:27:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 00:24:19.230 11:27:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:19.230 11:27:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:19.230 11:27:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:19.230 11:27:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 00:24:19.488 11:27:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:19.488 { 00:24:19.488 "name": "ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51", 00:24:19.488 "aliases": [ 00:24:19.488 "lvs/nvme0n1p0" 00:24:19.488 ], 00:24:19.488 "product_name": "Logical Volume", 00:24:19.488 "block_size": 4096, 00:24:19.488 "num_blocks": 26476544, 00:24:19.488 "uuid": "ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51", 00:24:19.488 "assigned_rate_limits": { 00:24:19.488 "rw_ios_per_sec": 0, 00:24:19.488 "rw_mbytes_per_sec": 0, 00:24:19.488 "r_mbytes_per_sec": 0, 00:24:19.488 "w_mbytes_per_sec": 0 00:24:19.488 }, 00:24:19.488 "claimed": false, 00:24:19.488 "zoned": false, 00:24:19.488 "supported_io_types": { 00:24:19.488 "read": true, 00:24:19.488 "write": true, 00:24:19.488 "unmap": true, 00:24:19.488 "flush": false, 00:24:19.488 "reset": true, 00:24:19.488 "nvme_admin": false, 00:24:19.488 "nvme_io": false, 00:24:19.488 "nvme_io_md": false, 00:24:19.488 "write_zeroes": true, 00:24:19.488 "zcopy": false, 00:24:19.488 "get_zone_info": false, 00:24:19.488 "zone_management": false, 00:24:19.488 "zone_append": false, 00:24:19.488 "compare": false, 00:24:19.488 "compare_and_write": false, 00:24:19.488 "abort": false, 00:24:19.488 "seek_hole": true, 00:24:19.488 "seek_data": true, 00:24:19.488 "copy": false, 00:24:19.488 "nvme_iov_md": false 00:24:19.488 }, 00:24:19.488 "driver_specific": { 00:24:19.488 "lvol": { 00:24:19.488 "lvol_store_uuid": "2c09184a-214f-4e53-b3d7-49ade7af5bc4", 00:24:19.488 "base_bdev": "nvme0n1", 00:24:19.488 "thin_provision": true, 00:24:19.488 "num_allocated_clusters": 0, 00:24:19.488 "snapshot": false, 00:24:19.488 "clone": false, 00:24:19.488 "esnap_clone": false 00:24:19.488 } 00:24:19.488 } 00:24:19.488 } 00:24:19.488 ]' 00:24:19.488 11:27:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:19.488 11:27:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:19.488 11:27:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:19.488 11:27:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:19.488 11:27:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:19.488 11:27:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:19.488 11:27:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:19.488 11:27:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:19.488 11:27:56 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:19.754 11:27:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:19.754 11:27:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:19.754 11:27:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 00:24:19.754 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 00:24:19.754 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:19.754 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:19.754 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:20.015 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 00:24:20.015 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:20.015 { 00:24:20.015 "name": "ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51", 00:24:20.015 "aliases": [ 00:24:20.015 "lvs/nvme0n1p0" 00:24:20.015 ], 00:24:20.015 "product_name": "Logical Volume", 00:24:20.015 "block_size": 4096, 00:24:20.015 "num_blocks": 26476544, 00:24:20.015 "uuid": "ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51", 00:24:20.015 "assigned_rate_limits": { 00:24:20.015 "rw_ios_per_sec": 0, 00:24:20.015 "rw_mbytes_per_sec": 0, 00:24:20.015 "r_mbytes_per_sec": 0, 00:24:20.015 "w_mbytes_per_sec": 0 00:24:20.015 }, 00:24:20.015 "claimed": false, 00:24:20.015 "zoned": false, 00:24:20.015 "supported_io_types": { 00:24:20.015 "read": true, 00:24:20.015 "write": true, 00:24:20.015 "unmap": true, 00:24:20.015 "flush": false, 00:24:20.015 "reset": true, 00:24:20.015 "nvme_admin": false, 00:24:20.015 "nvme_io": false, 00:24:20.015 "nvme_io_md": false, 00:24:20.015 "write_zeroes": true, 00:24:20.015 "zcopy": false, 00:24:20.015 "get_zone_info": false, 00:24:20.015 "zone_management": false, 00:24:20.015 "zone_append": false, 00:24:20.015 "compare": false, 00:24:20.015 "compare_and_write": false, 00:24:20.015 "abort": false, 00:24:20.015 "seek_hole": true, 00:24:20.015 "seek_data": true, 00:24:20.015 "copy": false, 00:24:20.015 "nvme_iov_md": false 00:24:20.015 }, 00:24:20.015 "driver_specific": { 00:24:20.016 "lvol": { 00:24:20.016 "lvol_store_uuid": "2c09184a-214f-4e53-b3d7-49ade7af5bc4", 00:24:20.016 "base_bdev": "nvme0n1", 00:24:20.016 "thin_provision": true, 00:24:20.016 "num_allocated_clusters": 0, 00:24:20.016 "snapshot": false, 00:24:20.016 "clone": false, 00:24:20.016 "esnap_clone": false 00:24:20.016 } 00:24:20.016 } 00:24:20.016 } 00:24:20.016 ]' 00:24:20.016 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:20.016 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:20.016 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:20.275 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:20.275 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:20.275 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:20.275 11:27:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:20.275 11:27:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:20.275 11:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:20.534 11:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 00:24:20.534 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 00:24:20.534 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:20.534 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:20.534 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:20.534 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 00:24:20.534 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:20.534 { 00:24:20.534 "name": "ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51", 00:24:20.534 "aliases": [ 00:24:20.534 "lvs/nvme0n1p0" 00:24:20.534 ], 00:24:20.534 "product_name": "Logical Volume", 00:24:20.534 "block_size": 4096, 00:24:20.534 "num_blocks": 26476544, 00:24:20.534 "uuid": "ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51", 00:24:20.534 "assigned_rate_limits": { 00:24:20.534 "rw_ios_per_sec": 0, 00:24:20.534 "rw_mbytes_per_sec": 0, 00:24:20.534 "r_mbytes_per_sec": 0, 00:24:20.534 "w_mbytes_per_sec": 0 00:24:20.534 }, 00:24:20.534 "claimed": false, 00:24:20.534 "zoned": false, 00:24:20.534 "supported_io_types": { 00:24:20.534 "read": true, 00:24:20.534 "write": true, 00:24:20.534 "unmap": true, 00:24:20.534 "flush": false, 00:24:20.534 "reset": true, 00:24:20.534 "nvme_admin": false, 00:24:20.534 "nvme_io": false, 00:24:20.534 "nvme_io_md": false, 00:24:20.534 "write_zeroes": true, 00:24:20.534 "zcopy": false, 00:24:20.534 "get_zone_info": false, 00:24:20.534 "zone_management": false, 00:24:20.534 "zone_append": false, 00:24:20.534 "compare": false, 00:24:20.534 "compare_and_write": false, 00:24:20.534 "abort": false, 00:24:20.534 "seek_hole": true, 00:24:20.534 "seek_data": true, 00:24:20.534 "copy": false, 00:24:20.534 "nvme_iov_md": false 00:24:20.534 }, 00:24:20.534 "driver_specific": { 00:24:20.534 "lvol": { 00:24:20.534 "lvol_store_uuid": "2c09184a-214f-4e53-b3d7-49ade7af5bc4", 00:24:20.534 "base_bdev": "nvme0n1", 00:24:20.534 "thin_provision": true, 00:24:20.534 "num_allocated_clusters": 0, 00:24:20.534 "snapshot": false, 00:24:20.534 "clone": false, 00:24:20.534 "esnap_clone": false 00:24:20.534 } 00:24:20.534 } 00:24:20.534 } 00:24:20.534 ]' 00:24:20.534 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:20.793 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:20.793 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:20.793 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:20.793 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:20.793 11:27:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:20.793 11:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:20.793 11:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 
--l2p_dram_limit 10' 00:24:20.793 11:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:20.793 11:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:20.793 11:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:20.793 11:27:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ad3e5ecd-dbd2-42e1-8a2f-b6102e425a51 --l2p_dram_limit 10 -c nvc0n1p0 00:24:20.793 [2024-11-15 11:27:58.186092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.793 [2024-11-15 11:27:58.186202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:20.793 [2024-11-15 11:27:58.186231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:20.793 [2024-11-15 11:27:58.186246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.793 [2024-11-15 11:27:58.186366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.793 [2024-11-15 11:27:58.186382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:20.793 [2024-11-15 11:27:58.186400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:20.793 [2024-11-15 11:27:58.186413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.793 [2024-11-15 11:27:58.186444] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:20.793 [2024-11-15 11:27:58.187634] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:20.793 [2024-11-15 11:27:58.187681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.793 [2024-11-15 11:27:58.187696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:20.793 [2024-11-15 11:27:58.187715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:24:20.793 [2024-11-15 11:27:58.187728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.793 [2024-11-15 11:27:58.187840] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a370d1e8-6e29-4106-877b-439f66871000 00:24:20.793 [2024-11-15 11:27:58.190442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.793 [2024-11-15 11:27:58.190495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:20.793 [2024-11-15 11:27:58.190512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:20.794 [2024-11-15 11:27:58.190532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.052 [2024-11-15 11:27:58.204978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.052 [2024-11-15 11:27:58.205048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:21.052 [2024-11-15 11:27:58.205068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.372 ms 00:24:21.052 [2024-11-15 11:27:58.205084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.052 [2024-11-15 11:27:58.205242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.052 [2024-11-15 11:27:58.205263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:21.052 [2024-11-15 11:27:58.205277] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:24:21.052 [2024-11-15 11:27:58.205299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.052 [2024-11-15 11:27:58.205403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.052 [2024-11-15 11:27:58.205422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:21.052 [2024-11-15 11:27:58.205435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:21.052 [2024-11-15 11:27:58.205457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.052 [2024-11-15 11:27:58.205496] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:21.052 [2024-11-15 11:27:58.212223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.052 [2024-11-15 11:27:58.212279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:21.052 [2024-11-15 11:27:58.212298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.747 ms 00:24:21.052 [2024-11-15 11:27:58.212312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.052 [2024-11-15 11:27:58.212399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.052 [2024-11-15 11:27:58.212422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:21.052 [2024-11-15 11:27:58.212440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:21.052 [2024-11-15 11:27:58.212454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.052 [2024-11-15 11:27:58.212520] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:21.052 [2024-11-15 11:27:58.212720] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:21.052 [2024-11-15 11:27:58.212750] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:21.052 [2024-11-15 11:27:58.212769] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:21.052 [2024-11-15 11:27:58.212790] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:21.052 [2024-11-15 11:27:58.212805] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:21.052 [2024-11-15 11:27:58.212822] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:21.052 [2024-11-15 11:27:58.212836] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:21.052 [2024-11-15 11:27:58.212856] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:21.053 [2024-11-15 11:27:58.212869] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:21.053 [2024-11-15 11:27:58.212887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.053 [2024-11-15 11:27:58.212899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:21.053 [2024-11-15 11:27:58.212916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:24:21.053 [2024-11-15 11:27:58.212947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.053 [2024-11-15 11:27:58.213034] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.053 [2024-11-15 11:27:58.213047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:21.053 [2024-11-15 11:27:58.213064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:21.053 [2024-11-15 11:27:58.213077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.053 [2024-11-15 11:27:58.213205] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:21.053 [2024-11-15 11:27:58.213223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:21.053 [2024-11-15 11:27:58.213240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:21.053 [2024-11-15 11:27:58.213254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:21.053 [2024-11-15 11:27:58.213272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:21.053 [2024-11-15 11:27:58.213283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:21.053 [2024-11-15 11:27:58.213298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:21.053 [2024-11-15 11:27:58.213310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:21.053 [2024-11-15 11:27:58.213326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:21.053 [2024-11-15 11:27:58.213337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:21.053 [2024-11-15 11:27:58.213352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:21.053 [2024-11-15 11:27:58.213365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:21.053 [2024-11-15 11:27:58.213381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:21.053 [2024-11-15 11:27:58.213393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:21.053 [2024-11-15 11:27:58.213409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:21.053 [2024-11-15 11:27:58.213421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:21.053 [2024-11-15 11:27:58.213441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:21.053 [2024-11-15 11:27:58.213453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:21.053 [2024-11-15 11:27:58.213469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:21.053 [2024-11-15 11:27:58.213481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:21.053 [2024-11-15 11:27:58.213497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:21.053 [2024-11-15 11:27:58.213508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:21.053 [2024-11-15 11:27:58.213523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:21.053 [2024-11-15 11:27:58.213535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:21.053 [2024-11-15 11:27:58.213549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:21.053 [2024-11-15 11:27:58.213920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:21.053 [2024-11-15 11:27:58.213973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:21.053 [2024-11-15 11:27:58.214010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:21.053 [2024-11-15 11:27:58.214050] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:21.053 [2024-11-15 11:27:58.214087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:21.053 [2024-11-15 11:27:58.214126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:21.053 [2024-11-15 11:27:58.214249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:21.053 [2024-11-15 11:27:58.214301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:21.053 [2024-11-15 11:27:58.214337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:21.053 [2024-11-15 11:27:58.214376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:21.053 [2024-11-15 11:27:58.214411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:21.053 [2024-11-15 11:27:58.214450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:21.053 [2024-11-15 11:27:58.214583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:21.053 [2024-11-15 11:27:58.214637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:21.053 [2024-11-15 11:27:58.214673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:21.053 [2024-11-15 11:27:58.214713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:21.053 [2024-11-15 11:27:58.214889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:21.053 [2024-11-15 11:27:58.214929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:21.053 [2024-11-15 11:27:58.215023] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:21.053 [2024-11-15 11:27:58.215074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:21.053 [2024-11-15 11:27:58.215111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:21.053 [2024-11-15 11:27:58.215152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:21.053 [2024-11-15 11:27:58.215251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:21.053 [2024-11-15 11:27:58.215302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:21.053 [2024-11-15 11:27:58.215338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:21.053 [2024-11-15 11:27:58.215378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:21.053 [2024-11-15 11:27:58.215413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:21.053 [2024-11-15 11:27:58.215527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:21.053 [2024-11-15 11:27:58.215550] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:21.053 [2024-11-15 11:27:58.215590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:21.053 [2024-11-15 11:27:58.215610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:21.053 [2024-11-15 11:27:58.215627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:21.053 [2024-11-15 11:27:58.215641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:21.053 [2024-11-15 11:27:58.215657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:21.053 [2024-11-15 11:27:58.215670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:21.053 [2024-11-15 11:27:58.215686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:21.053 [2024-11-15 11:27:58.215699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:21.053 [2024-11-15 11:27:58.215716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:21.053 [2024-11-15 11:27:58.215729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:21.053 [2024-11-15 11:27:58.215748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:21.053 [2024-11-15 11:27:58.215760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:21.053 [2024-11-15 11:27:58.215778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:21.053 [2024-11-15 11:27:58.215791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:21.053 [2024-11-15 11:27:58.215809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:21.053 [2024-11-15 11:27:58.215822] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:21.053 [2024-11-15 11:27:58.215840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:21.053 [2024-11-15 11:27:58.215854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:21.053 [2024-11-15 11:27:58.215871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:21.053 [2024-11-15 11:27:58.215883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:21.053 [2024-11-15 11:27:58.215900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:21.053 [2024-11-15 11:27:58.215915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.053 [2024-11-15 11:27:58.215933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:21.053 [2024-11-15 11:27:58.215946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.788 ms 00:24:21.053 [2024-11-15 11:27:58.215962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.053 [2024-11-15 11:27:58.216066] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:21.053 [2024-11-15 11:27:58.216092] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:24.340 [2024-11-15 11:28:01.458746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.340 [2024-11-15 11:28:01.459129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:24.340 [2024-11-15 11:28:01.459164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3247.940 ms 00:24:24.340 [2024-11-15 11:28:01.459183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.340 [2024-11-15 11:28:01.508713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.340 [2024-11-15 11:28:01.508799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:24.340 [2024-11-15 11:28:01.508822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.296 ms 00:24:24.340 [2024-11-15 11:28:01.508840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.340 [2024-11-15 11:28:01.509073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.340 [2024-11-15 11:28:01.509094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:24.340 [2024-11-15 11:28:01.509110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:24.340 [2024-11-15 11:28:01.509137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.340 [2024-11-15 11:28:01.566467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.340 [2024-11-15 11:28:01.566573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.340 [2024-11-15 11:28:01.566594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.363 ms 00:24:24.340 [2024-11-15 11:28:01.566612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.340 [2024-11-15 11:28:01.566701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.340 [2024-11-15 11:28:01.566728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:24.340 [2024-11-15 11:28:01.566743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:24.340 [2024-11-15 11:28:01.566759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.340 [2024-11-15 11:28:01.567695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.340 [2024-11-15 11:28:01.567738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:24.340 [2024-11-15 11:28:01.567753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:24:24.340 [2024-11-15 11:28:01.567770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.340 [2024-11-15 11:28:01.567927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.340 [2024-11-15 11:28:01.567949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:24.340 [2024-11-15 11:28:01.567967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:24:24.340 [2024-11-15 11:28:01.567988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.340 [2024-11-15 11:28:01.594622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.340 [2024-11-15 11:28:01.594696] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:24.340 [2024-11-15 11:28:01.594715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.642 ms 00:24:24.340 [2024-11-15 11:28:01.594733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.340 [2024-11-15 11:28:01.620738] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:24.340 [2024-11-15 11:28:01.625872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.340 [2024-11-15 11:28:01.626144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:24.340 [2024-11-15 11:28:01.626190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.968 ms 00:24:24.340 [2024-11-15 11:28:01.626206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.340 [2024-11-15 11:28:01.721900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.340 [2024-11-15 11:28:01.722007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:24.340 [2024-11-15 11:28:01.722035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.754 ms 00:24:24.340 [2024-11-15 11:28:01.722050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.340 [2024-11-15 11:28:01.722355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.340 [2024-11-15 11:28:01.722385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:24.340 [2024-11-15 11:28:01.722412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:24:24.340 [2024-11-15 11:28:01.722435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.600 [2024-11-15 11:28:01.761678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.600 [2024-11-15 11:28:01.761787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:24.600 [2024-11-15 11:28:01.761814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.195 ms 00:24:24.600 [2024-11-15 11:28:01.761827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.600 [2024-11-15 11:28:01.799274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.600 [2024-11-15 11:28:01.799354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:24.600 [2024-11-15 11:28:01.799382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.423 ms 00:24:24.600 [2024-11-15 11:28:01.799396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.600 [2024-11-15 11:28:01.800260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.600 [2024-11-15 11:28:01.800299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:24.600 [2024-11-15 11:28:01.800319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:24:24.600 [2024-11-15 11:28:01.800338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.600 [2024-11-15 11:28:01.911350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.600 [2024-11-15 11:28:01.911691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:24.600 [2024-11-15 11:28:01.911736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.091 ms 00:24:24.600 [2024-11-15 11:28:01.911752] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.600 [2024-11-15 11:28:01.951669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.600 [2024-11-15 11:28:01.951749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:24.600 [2024-11-15 11:28:01.951775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.835 ms 00:24:24.600 [2024-11-15 11:28:01.951789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.600 [2024-11-15 11:28:01.991620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.600 [2024-11-15 11:28:01.991695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:24.600 [2024-11-15 11:28:01.991720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.822 ms 00:24:24.600 [2024-11-15 11:28:01.991735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.859 [2024-11-15 11:28:02.029007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.859 [2024-11-15 11:28:02.029296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:24.859 [2024-11-15 11:28:02.029334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.261 ms 00:24:24.859 [2024-11-15 11:28:02.029348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.859 [2024-11-15 11:28:02.029412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.859 [2024-11-15 11:28:02.029429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:24.859 [2024-11-15 11:28:02.029451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:24.859 [2024-11-15 11:28:02.029464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.859 [2024-11-15 11:28:02.029622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.859 [2024-11-15 11:28:02.029640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:24.859 [2024-11-15 11:28:02.029663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:24.859 [2024-11-15 11:28:02.029676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.859 [2024-11-15 11:28:02.031295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3850.771 ms, result 0 00:24:24.859 { 00:24:24.859 "name": "ftl0", 00:24:24.859 "uuid": "a370d1e8-6e29-4106-877b-439f66871000" 00:24:24.859 } 00:24:24.859 11:28:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:24.860 11:28:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:25.118 11:28:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:25.118 11:28:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:25.118 11:28:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:25.118 /dev/nbd0 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:25.377 1+0 records in 00:24:25.377 1+0 records out 00:24:25.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023741 s, 17.3 MB/s 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:24:25.377 11:28:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:25.377 [2024-11-15 11:28:02.636117] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:24:25.377 [2024-11-15 11:28:02.636249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78383 ] 00:24:25.636 [2024-11-15 11:28:02.816656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.636 [2024-11-15 11:28:02.928021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.013  [2024-11-15T11:28:05.356Z] Copying: 199/1024 [MB] (199 MBps) [2024-11-15T11:28:06.343Z] Copying: 397/1024 [MB] (197 MBps) [2024-11-15T11:28:07.282Z] Copying: 595/1024 [MB] (198 MBps) [2024-11-15T11:28:08.661Z] Copying: 792/1024 [MB] (197 MBps) [2024-11-15T11:28:08.661Z] Copying: 983/1024 [MB] (190 MBps) [2024-11-15T11:28:10.038Z] Copying: 1024/1024 [MB] (average 196 MBps) 00:24:32.637 00:24:32.637 11:28:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:34.014 11:28:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:34.273 [2024-11-15 11:28:11.416288] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
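Before anything is written through ftl0, waitfornbd (traced above at common/autotest_common.sh@870-891) proves that /dev/nbd0 is actually serving I/O: it waits for the name to appear in /proc/partitions, then reads one 4096-byte block with O_DIRECT and fails if the block comes back empty. A condensed sketch of that helper, with device name, block size, scratch file, and retry bounds copied from the trace (the sleep between retries is assumed, as it is not echoed by xtrace here):

    waitfornbd() {
        local nbd_name=$1 i size
        local scratch=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
        for ((i = 1; i <= 20; i++)); do            # wait for the kernel to publish the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                              # assumed interval
        done
        for ((i = 1; i <= 20; i++)); do            # then prove a real read works
            # iflag=direct bypasses the page cache, so this read must travel the
            # full nbd -> SPDK -> FTL path instead of being satisfied from RAM
            dd if=/dev/$nbd_name of=$scratch bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        size=$(stat -c %s $scratch)
        rm -f $scratch
        [[ $size != 0 ]]                           # 4096 bytes read => device is live
    }

Once the check passes, the 1 GiB testfile is generated from /dev/urandom at roughly 196 MBps, while the copy onto /dev/nbd0 that follows averages about 16 MBps, consistent with every 4 KiB block now traversing nbd and the FTL write path with its NV cache.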
00:24:34.273 [2024-11-15 11:28:11.416418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78477 ] 00:24:34.273 [2024-11-15 11:28:11.597392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.532 [2024-11-15 11:28:11.712791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.908  [2024-11-15T11:28:14.244Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-15T11:28:15.178Z] Copying: 33/1024 [MB] (16 MBps) [2024-11-15T11:28:16.113Z] Copying: 49/1024 [MB] (16 MBps) [2024-11-15T11:28:17.048Z] Copying: 66/1024 [MB] (16 MBps) [2024-11-15T11:28:18.441Z] Copying: 83/1024 [MB] (16 MBps) [2024-11-15T11:28:19.377Z] Copying: 100/1024 [MB] (17 MBps) [2024-11-15T11:28:20.313Z] Copying: 117/1024 [MB] (16 MBps) [2024-11-15T11:28:21.249Z] Copying: 134/1024 [MB] (16 MBps) [2024-11-15T11:28:22.186Z] Copying: 150/1024 [MB] (16 MBps) [2024-11-15T11:28:23.122Z] Copying: 167/1024 [MB] (16 MBps) [2024-11-15T11:28:24.058Z] Copying: 183/1024 [MB] (16 MBps) [2024-11-15T11:28:25.435Z] Copying: 199/1024 [MB] (16 MBps) [2024-11-15T11:28:26.375Z] Copying: 215/1024 [MB] (16 MBps) [2024-11-15T11:28:27.314Z] Copying: 230/1024 [MB] (14 MBps) [2024-11-15T11:28:28.252Z] Copying: 246/1024 [MB] (16 MBps) [2024-11-15T11:28:29.189Z] Copying: 262/1024 [MB] (16 MBps) [2024-11-15T11:28:30.126Z] Copying: 279/1024 [MB] (16 MBps) [2024-11-15T11:28:31.063Z] Copying: 294/1024 [MB] (15 MBps) [2024-11-15T11:28:32.443Z] Copying: 311/1024 [MB] (16 MBps) [2024-11-15T11:28:33.012Z] Copying: 328/1024 [MB] (17 MBps) [2024-11-15T11:28:34.390Z] Copying: 344/1024 [MB] (16 MBps) [2024-11-15T11:28:35.325Z] Copying: 360/1024 [MB] (15 MBps) [2024-11-15T11:28:36.262Z] Copying: 375/1024 [MB] (15 MBps) [2024-11-15T11:28:37.199Z] Copying: 390/1024 [MB] (15 MBps) [2024-11-15T11:28:38.136Z] Copying: 406/1024 [MB] (15 MBps) [2024-11-15T11:28:39.073Z] Copying: 422/1024 [MB] (16 MBps) [2024-11-15T11:28:40.009Z] Copying: 438/1024 [MB] (15 MBps) [2024-11-15T11:28:41.386Z] Copying: 454/1024 [MB] (16 MBps) [2024-11-15T11:28:42.356Z] Copying: 471/1024 [MB] (16 MBps) [2024-11-15T11:28:43.293Z] Copying: 488/1024 [MB] (16 MBps) [2024-11-15T11:28:44.229Z] Copying: 505/1024 [MB] (16 MBps) [2024-11-15T11:28:45.165Z] Copying: 521/1024 [MB] (16 MBps) [2024-11-15T11:28:46.103Z] Copying: 538/1024 [MB] (16 MBps) [2024-11-15T11:28:47.039Z] Copying: 554/1024 [MB] (16 MBps) [2024-11-15T11:28:48.417Z] Copying: 571/1024 [MB] (16 MBps) [2024-11-15T11:28:48.986Z] Copying: 587/1024 [MB] (16 MBps) [2024-11-15T11:28:50.370Z] Copying: 604/1024 [MB] (16 MBps) [2024-11-15T11:28:51.306Z] Copying: 621/1024 [MB] (16 MBps) [2024-11-15T11:28:52.241Z] Copying: 637/1024 [MB] (16 MBps) [2024-11-15T11:28:53.177Z] Copying: 654/1024 [MB] (17 MBps) [2024-11-15T11:28:54.112Z] Copying: 672/1024 [MB] (17 MBps) [2024-11-15T11:28:55.049Z] Copying: 689/1024 [MB] (16 MBps) [2024-11-15T11:28:55.986Z] Copying: 706/1024 [MB] (17 MBps) [2024-11-15T11:28:57.365Z] Copying: 724/1024 [MB] (17 MBps) [2024-11-15T11:28:58.301Z] Copying: 742/1024 [MB] (17 MBps) [2024-11-15T11:28:59.239Z] Copying: 760/1024 [MB] (17 MBps) [2024-11-15T11:29:00.175Z] Copying: 777/1024 [MB] (17 MBps) [2024-11-15T11:29:01.112Z] Copying: 795/1024 [MB] (18 MBps) [2024-11-15T11:29:02.047Z] Copying: 813/1024 [MB] (17 MBps) [2024-11-15T11:29:02.984Z] Copying: 830/1024 [MB] (17 MBps) 
[2024-11-15T11:29:04.361Z] Copying: 848/1024 [MB] (17 MBps) [2024-11-15T11:29:05.298Z] Copying: 865/1024 [MB] (17 MBps) [2024-11-15T11:29:06.258Z] Copying: 882/1024 [MB] (16 MBps) [2024-11-15T11:29:07.195Z] Copying: 898/1024 [MB] (16 MBps) [2024-11-15T11:29:08.131Z] Copying: 915/1024 [MB] (17 MBps) [2024-11-15T11:29:09.067Z] Copying: 932/1024 [MB] (16 MBps) [2024-11-15T11:29:10.005Z] Copying: 948/1024 [MB] (16 MBps) [2024-11-15T11:29:11.380Z] Copying: 964/1024 [MB] (16 MBps) [2024-11-15T11:29:11.945Z] Copying: 981/1024 [MB] (16 MBps) [2024-11-15T11:29:13.322Z] Copying: 997/1024 [MB] (16 MBps) [2024-11-15T11:29:13.891Z] Copying: 1013/1024 [MB] (15 MBps) [2024-11-15T11:29:14.827Z] Copying: 1024/1024 [MB] (average 16 MBps) 00:25:37.426 00:25:37.685 11:29:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:37.685 11:29:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:37.685 11:29:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:37.943 [2024-11-15 11:29:15.241099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.943 [2024-11-15 11:29:15.241161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:37.943 [2024-11-15 11:29:15.241177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:37.943 [2024-11-15 11:29:15.241191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.944 [2024-11-15 11:29:15.241225] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:37.944 [2024-11-15 11:29:15.245352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.944 [2024-11-15 11:29:15.245389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:37.944 [2024-11-15 11:29:15.245405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.109 ms 00:25:37.944 [2024-11-15 11:29:15.245416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.944 [2024-11-15 11:29:15.247537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.944 [2024-11-15 11:29:15.247586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:37.944 [2024-11-15 11:29:15.247603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:25:37.944 [2024-11-15 11:29:15.247623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.944 [2024-11-15 11:29:15.265496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.944 [2024-11-15 11:29:15.265543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:37.944 [2024-11-15 11:29:15.265572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.874 ms 00:25:37.944 [2024-11-15 11:29:15.265584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.944 [2024-11-15 11:29:15.270681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.944 [2024-11-15 11:29:15.270833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:37.944 [2024-11-15 11:29:15.270860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.059 ms 00:25:37.944 [2024-11-15 11:29:15.270871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.944 [2024-11-15 
11:29:15.308389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.944 [2024-11-15 11:29:15.308430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:37.944 [2024-11-15 11:29:15.308447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.483 ms 00:25:37.944 [2024-11-15 11:29:15.308458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.944 [2024-11-15 11:29:15.331582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.944 [2024-11-15 11:29:15.331624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:37.944 [2024-11-15 11:29:15.331642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.107 ms 00:25:37.944 [2024-11-15 11:29:15.331665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.944 [2024-11-15 11:29:15.331843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.944 [2024-11-15 11:29:15.331860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:37.944 [2024-11-15 11:29:15.331889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:25:37.944 [2024-11-15 11:29:15.331899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.204 [2024-11-15 11:29:15.368346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.204 [2024-11-15 11:29:15.368495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:38.204 [2024-11-15 11:29:15.368539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.482 ms 00:25:38.204 [2024-11-15 11:29:15.368549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.204 [2024-11-15 11:29:15.406963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.204 [2024-11-15 11:29:15.407035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:38.204 [2024-11-15 11:29:15.407056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.407 ms 00:25:38.204 [2024-11-15 11:29:15.407067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.204 [2024-11-15 11:29:15.446146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.204 [2024-11-15 11:29:15.446257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:38.204 [2024-11-15 11:29:15.446277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.044 ms 00:25:38.204 [2024-11-15 11:29:15.446287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.204 [2024-11-15 11:29:15.484403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.204 [2024-11-15 11:29:15.484474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:38.204 [2024-11-15 11:29:15.484493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.985 ms 00:25:38.204 [2024-11-15 11:29:15.484520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.204 [2024-11-15 11:29:15.484618] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:38.204 [2024-11-15 11:29:15.484639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:38.204 [2024-11-15 11:29:15.484654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:25:38.204 [2024-11-15 11:29:15.484666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free
00:25:38.205 [2024-11-15 11:29:15.485919] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:38.205 [2024-11-15 11:29:15.485932] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*:
[FTL][ftl0] device UUID: a370d1e8-6e29-4106-877b-439f66871000 00:25:38.205 [2024-11-15 11:29:15.485944] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:38.205 [2024-11-15 11:29:15.485959] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:38.205 [2024-11-15 11:29:15.485969] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:38.205 [2024-11-15 11:29:15.485986] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:38.205 [2024-11-15 11:29:15.485996] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:38.205 [2024-11-15 11:29:15.486009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:38.205 [2024-11-15 11:29:15.486019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:38.205 [2024-11-15 11:29:15.486030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:38.205 [2024-11-15 11:29:15.486039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:38.205 [2024-11-15 11:29:15.486052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.205 [2024-11-15 11:29:15.486063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:38.205 [2024-11-15 11:29:15.486077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:25:38.205 [2024-11-15 11:29:15.486086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.205 [2024-11-15 11:29:15.506466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.205 [2024-11-15 11:29:15.506784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:38.205 [2024-11-15 11:29:15.506817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.319 ms 00:25:38.205 [2024-11-15 11:29:15.506828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.205 [2024-11-15 11:29:15.507437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.205 [2024-11-15 11:29:15.507456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:38.205 [2024-11-15 11:29:15.507470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:25:38.205 [2024-11-15 11:29:15.507481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.205 [2024-11-15 11:29:15.572982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.205 [2024-11-15 11:29:15.573046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:38.205 [2024-11-15 11:29:15.573064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.205 [2024-11-15 11:29:15.573075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.205 [2024-11-15 11:29:15.573160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.205 [2024-11-15 11:29:15.573172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:38.205 [2024-11-15 11:29:15.573185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.205 [2024-11-15 11:29:15.573195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.205 [2024-11-15 11:29:15.573308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.205 [2024-11-15 11:29:15.573327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:25:38.205 [2024-11-15 11:29:15.573341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.205 [2024-11-15 11:29:15.573351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.205 [2024-11-15 11:29:15.573378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.205 [2024-11-15 11:29:15.573389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:38.205 [2024-11-15 11:29:15.573402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.205 [2024-11-15 11:29:15.573412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.465 [2024-11-15 11:29:15.697387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.465 [2024-11-15 11:29:15.697456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:38.465 [2024-11-15 11:29:15.697476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.465 [2024-11-15 11:29:15.697486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.465 [2024-11-15 11:29:15.797384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.465 [2024-11-15 11:29:15.797640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:38.465 [2024-11-15 11:29:15.797671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.465 [2024-11-15 11:29:15.797683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.465 [2024-11-15 11:29:15.797809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.465 [2024-11-15 11:29:15.797823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:38.465 [2024-11-15 11:29:15.797837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.465 [2024-11-15 11:29:15.797851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.465 [2024-11-15 11:29:15.797913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.465 [2024-11-15 11:29:15.797926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:38.465 [2024-11-15 11:29:15.797939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.465 [2024-11-15 11:29:15.797950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.465 [2024-11-15 11:29:15.798079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.465 [2024-11-15 11:29:15.798094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:38.465 [2024-11-15 11:29:15.798107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.465 [2024-11-15 11:29:15.798121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.465 [2024-11-15 11:29:15.798164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.465 [2024-11-15 11:29:15.798190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:38.465 [2024-11-15 11:29:15.798204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.465 [2024-11-15 11:29:15.798214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.465 [2024-11-15 11:29:15.798259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.465 [2024-11-15 11:29:15.798272] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:38.465 [2024-11-15 11:29:15.798285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.465 [2024-11-15 11:29:15.798295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.465 [2024-11-15 11:29:15.798347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.465 [2024-11-15 11:29:15.798360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:38.465 [2024-11-15 11:29:15.798374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.465 [2024-11-15 11:29:15.798384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.465 [2024-11-15 11:29:15.798520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 558.293 ms, result 0 00:25:38.465 true 00:25:38.465 11:29:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78230 00:25:38.465 11:29:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78230 00:25:38.465 11:29:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:38.724 [2024-11-15 11:29:15.923548] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:25:38.724 [2024-11-15 11:29:15.923710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79128 ] 00:25:38.724 [2024-11-15 11:29:16.104691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.982 [2024-11-15 11:29:16.219858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.356  [2024-11-15T11:29:18.692Z] Copying: 201/1024 [MB] (201 MBps) [2024-11-15T11:29:19.627Z] Copying: 407/1024 [MB] (205 MBps) [2024-11-15T11:29:20.562Z] Copying: 616/1024 [MB] (209 MBps) [2024-11-15T11:29:21.940Z] Copying: 817/1024 [MB] (201 MBps) [2024-11-15T11:29:21.940Z] Copying: 1021/1024 [MB] (204 MBps) [2024-11-15T11:29:22.905Z] Copying: 1024/1024 [MB] (average 204 MBps) 00:25:45.504 00:25:45.504 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78230 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:45.504 11:29:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:45.504 [2024-11-15 11:29:22.787820] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
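For reference, the round trip recorded above reduces to a handful of commands. A minimal sketch of the flow, using the paths and RPCs exactly as the trace prints them (TGT_PID stands in for the target's pid, 78230 in this run; this is an illustration of the recorded sequence, not a copy of dirty_shutdown.sh):

#!/usr/bin/env bash
# Sketch of the dirty-shutdown sequence this log records.
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk

# Quiesce the NBD front-end, then unload FTL once cleanly
# (the 'FTL shutdown' management process traced above).
sync /dev/nbd0
"$SPDK/scripts/rpc.py" nbd_stop_disk /dev/nbd0
"$SPDK/scripts/rpc.py" bdev_ftl_unload -b ftl0

# Simulate the crash: SIGKILL the target and drop its trace shm file,
# so the next startup has to recover instead of finding a clean state.
kill -9 "$TGT_PID"                                # TGT_PID is illustrative
rm -f "/dev/shm/spdk_tgt_trace.pid$TGT_PID"

# Regenerate 1 GiB of random test data (262144 x 4096-byte blocks), then
# write it into the ftl0 bdev at an offset via the JSON config. This
# restart is what produces the recovery path seen below ('Performing
# recovery on blobstore', the Restore* steps, 'Set FTL dirty state').
"$SPDK/build/bin/spdk_dd" --if=/dev/urandom --of="$SPDK/test/ftl/testfile2" --bs=4096 --count=262144
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile2" --ob=ftl0 --count=262144 --seek=262144 --json="$SPDK/test/ftl/config/ftl.json"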
00:25:45.504 [2024-11-15 11:29:22.788119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79207 ] 00:25:45.764 [2024-11-15 11:29:22.969624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.764 [2024-11-15 11:29:23.087058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.332 [2024-11-15 11:29:23.451039] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:46.332 [2024-11-15 11:29:23.451109] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:46.332 [2024-11-15 11:29:23.517330] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:46.332 [2024-11-15 11:29:23.517758] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:46.332 [2024-11-15 11:29:23.518005] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:46.593 [2024-11-15 11:29:23.826290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.593 [2024-11-15 11:29:23.826342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:46.593 [2024-11-15 11:29:23.826359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:46.593 [2024-11-15 11:29:23.826370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.593 [2024-11-15 11:29:23.826424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.593 [2024-11-15 11:29:23.826437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:46.593 [2024-11-15 11:29:23.826448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:46.593 [2024-11-15 11:29:23.826458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.593 [2024-11-15 11:29:23.826480] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:46.593 [2024-11-15 11:29:23.827428] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:46.593 [2024-11-15 11:29:23.827605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.593 [2024-11-15 11:29:23.827622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:46.593 [2024-11-15 11:29:23.827635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:25:46.593 [2024-11-15 11:29:23.827646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.593 [2024-11-15 11:29:23.829142] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:46.593 [2024-11-15 11:29:23.848933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.593 [2024-11-15 11:29:23.848979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:46.593 [2024-11-15 11:29:23.848994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.823 ms 00:25:46.593 [2024-11-15 11:29:23.849005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.593 [2024-11-15 11:29:23.849067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.593 [2024-11-15 11:29:23.849080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:25:46.593 [2024-11-15 11:29:23.849091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:46.593 [2024-11-15 11:29:23.849102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.593 [2024-11-15 11:29:23.855925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.593 [2024-11-15 11:29:23.855953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:46.593 [2024-11-15 11:29:23.855965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.760 ms 00:25:46.593 [2024-11-15 11:29:23.855976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.593 [2024-11-15 11:29:23.856052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.593 [2024-11-15 11:29:23.856066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:46.593 [2024-11-15 11:29:23.856078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:46.593 [2024-11-15 11:29:23.856088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.593 [2024-11-15 11:29:23.856132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.593 [2024-11-15 11:29:23.856145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:46.593 [2024-11-15 11:29:23.856156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:46.593 [2024-11-15 11:29:23.856166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.593 [2024-11-15 11:29:23.856191] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:46.593 [2024-11-15 11:29:23.861116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.593 [2024-11-15 11:29:23.861149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:46.593 [2024-11-15 11:29:23.861162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.940 ms 00:25:46.593 [2024-11-15 11:29:23.861172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.593 [2024-11-15 11:29:23.861203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.593 [2024-11-15 11:29:23.861215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:46.593 [2024-11-15 11:29:23.861225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:46.593 [2024-11-15 11:29:23.861236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.593 [2024-11-15 11:29:23.861293] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:46.593 [2024-11-15 11:29:23.861317] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:46.593 [2024-11-15 11:29:23.861354] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:46.593 [2024-11-15 11:29:23.861372] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:46.593 [2024-11-15 11:29:23.861463] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:46.593 [2024-11-15 11:29:23.861478] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:46.593 
[2024-11-15 11:29:23.861491] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:46.593 [2024-11-15 11:29:23.861505] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:46.593 [2024-11-15 11:29:23.861521] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:46.593 [2024-11-15 11:29:23.861533] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:46.593 [2024-11-15 11:29:23.861544] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:46.593 [2024-11-15 11:29:23.861575] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:46.593 [2024-11-15 11:29:23.861587] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:46.593 [2024-11-15 11:29:23.861598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.593 [2024-11-15 11:29:23.861610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:46.593 [2024-11-15 11:29:23.861623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:25:46.593 [2024-11-15 11:29:23.861634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.593 [2024-11-15 11:29:23.861706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.593 [2024-11-15 11:29:23.861722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:46.593 [2024-11-15 11:29:23.861733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:46.593 [2024-11-15 11:29:23.861744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.593 [2024-11-15 11:29:23.861840] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:46.593 [2024-11-15 11:29:23.861856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:46.593 [2024-11-15 11:29:23.861867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:46.593 [2024-11-15 11:29:23.861878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.593 [2024-11-15 11:29:23.861889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:46.593 [2024-11-15 11:29:23.861900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:46.593 [2024-11-15 11:29:23.861911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:46.593 [2024-11-15 11:29:23.861920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:46.593 [2024-11-15 11:29:23.861930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:46.593 [2024-11-15 11:29:23.861941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:46.593 [2024-11-15 11:29:23.861950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:46.593 [2024-11-15 11:29:23.861971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:46.593 [2024-11-15 11:29:23.861980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:46.593 [2024-11-15 11:29:23.861989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:46.593 [2024-11-15 11:29:23.861999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:46.593 [2024-11-15 11:29:23.862009] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.593 [2024-11-15 11:29:23.862018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:46.593 [2024-11-15 11:29:23.862028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:46.593 [2024-11-15 11:29:23.862037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.593 [2024-11-15 11:29:23.862047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:46.593 [2024-11-15 11:29:23.862056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:46.593 [2024-11-15 11:29:23.862065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.593 [2024-11-15 11:29:23.862074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:46.593 [2024-11-15 11:29:23.862083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:46.593 [2024-11-15 11:29:23.862092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.593 [2024-11-15 11:29:23.862102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:46.593 [2024-11-15 11:29:23.862111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:46.593 [2024-11-15 11:29:23.862119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.593 [2024-11-15 11:29:23.862129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:46.593 [2024-11-15 11:29:23.862139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:46.593 [2024-11-15 11:29:23.862147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.593 [2024-11-15 11:29:23.862157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:46.593 [2024-11-15 11:29:23.862166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:46.593 [2024-11-15 11:29:23.862185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:46.593 [2024-11-15 11:29:23.862194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:46.593 [2024-11-15 11:29:23.862203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:46.593 [2024-11-15 11:29:23.862212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:46.593 [2024-11-15 11:29:23.862222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:46.593 [2024-11-15 11:29:23.862231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:46.594 [2024-11-15 11:29:23.862240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.594 [2024-11-15 11:29:23.862250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:46.594 [2024-11-15 11:29:23.862260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:46.594 [2024-11-15 11:29:23.862270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.594 [2024-11-15 11:29:23.862279] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:46.594 [2024-11-15 11:29:23.862289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:46.594 [2024-11-15 11:29:23.862298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:46.594 [2024-11-15 11:29:23.862312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.594 [2024-11-15 
11:29:23.862322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:46.594 [2024-11-15 11:29:23.862332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:46.594 [2024-11-15 11:29:23.862341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:46.594 [2024-11-15 11:29:23.862351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:46.594 [2024-11-15 11:29:23.862361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:46.594 [2024-11-15 11:29:23.862371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:46.594 [2024-11-15 11:29:23.862381] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:46.594 [2024-11-15 11:29:23.862394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:46.594 [2024-11-15 11:29:23.862405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:46.594 [2024-11-15 11:29:23.862416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:46.594 [2024-11-15 11:29:23.862426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:46.594 [2024-11-15 11:29:23.862436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:46.594 [2024-11-15 11:29:23.862447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:46.594 [2024-11-15 11:29:23.862457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:46.594 [2024-11-15 11:29:23.862468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:46.594 [2024-11-15 11:29:23.862479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:46.594 [2024-11-15 11:29:23.862489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:46.594 [2024-11-15 11:29:23.862499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:46.594 [2024-11-15 11:29:23.862509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:46.594 [2024-11-15 11:29:23.862519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:46.594 [2024-11-15 11:29:23.862530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:46.594 [2024-11-15 11:29:23.862540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:46.594 [2024-11-15 11:29:23.862550] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:25:46.594 [2024-11-15 11:29:23.862573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:46.594 [2024-11-15 11:29:23.862585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:46.594 [2024-11-15 11:29:23.862596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:46.594 [2024-11-15 11:29:23.862607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:46.594 [2024-11-15 11:29:23.862618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:46.594 [2024-11-15 11:29:23.862629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.594 [2024-11-15 11:29:23.862640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:46.594 [2024-11-15 11:29:23.862650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:25:46.594 [2024-11-15 11:29:23.862660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.594 [2024-11-15 11:29:23.902416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.594 [2024-11-15 11:29:23.902463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:46.594 [2024-11-15 11:29:23.902479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.772 ms 00:25:46.594 [2024-11-15 11:29:23.902489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.594 [2024-11-15 11:29:23.902597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.594 [2024-11-15 11:29:23.902615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:46.594 [2024-11-15 11:29:23.902642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:46.594 [2024-11-15 11:29:23.902654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.594 [2024-11-15 11:29:23.967518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.594 [2024-11-15 11:29:23.967740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:46.594 [2024-11-15 11:29:23.967771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.895 ms 00:25:46.594 [2024-11-15 11:29:23.967782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.594 [2024-11-15 11:29:23.967835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.594 [2024-11-15 11:29:23.967846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:46.594 [2024-11-15 11:29:23.967858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:46.594 [2024-11-15 11:29:23.967868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.594 [2024-11-15 11:29:23.968380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.594 [2024-11-15 11:29:23.968395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:46.594 [2024-11-15 11:29:23.968407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:25:46.594 [2024-11-15 11:29:23.968417] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.594 [2024-11-15 11:29:23.968546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.594 [2024-11-15 11:29:23.968574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:46.594 [2024-11-15 11:29:23.968585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:25:46.594 [2024-11-15 11:29:23.968595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.594 [2024-11-15 11:29:23.988037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.594 [2024-11-15 11:29:23.988074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:46.594 [2024-11-15 11:29:23.988089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.450 ms 00:25:46.594 [2024-11-15 11:29:23.988100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.854 [2024-11-15 11:29:24.007222] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:46.854 [2024-11-15 11:29:24.007379] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:46.854 [2024-11-15 11:29:24.007400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.854 [2024-11-15 11:29:24.007412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:46.854 [2024-11-15 11:29:24.007424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.212 ms 00:25:46.854 [2024-11-15 11:29:24.007435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.854 [2024-11-15 11:29:24.037895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.854 [2024-11-15 11:29:24.038049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:46.854 [2024-11-15 11:29:24.038084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.412 ms 00:25:46.854 [2024-11-15 11:29:24.038096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.854 [2024-11-15 11:29:24.056417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.854 [2024-11-15 11:29:24.056460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:46.854 [2024-11-15 11:29:24.056475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.303 ms 00:25:46.854 [2024-11-15 11:29:24.056486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.854 [2024-11-15 11:29:24.074602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.854 [2024-11-15 11:29:24.074637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:46.854 [2024-11-15 11:29:24.074650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.102 ms 00:25:46.854 [2024-11-15 11:29:24.074661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.854 [2024-11-15 11:29:24.075460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.854 [2024-11-15 11:29:24.075489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:46.854 [2024-11-15 11:29:24.075501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:25:46.854 [2024-11-15 11:29:24.075512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
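Every management step in this trace is a quadruple of trace_step entries: 427 (Action), 428 (name), 430 (duration), 431 (status). To see where a startup like the one above spends its time, a throwaway helper such as the following can tabulate the slowest steps. It assumes the flattened console log was saved as build.log; the script is illustrative and not part of the SPDK tree:

# 1) put each bracketed log entry back on its own line,
# 2) remember each '428:trace_step ... name:' entry,
# 3) print the duration from the matching '430:trace_step' entry,
#    slowest steps first.
sed 's/\[2024-/\n[2024-/g' build.log |
awk '
  /428:trace_step/ { sub(/.*name: /, ""); sub(/ [0-9:.]+ *$/, ""); name = $0 }
  /430:trace_step/ && match($0, /duration: [0-9.]+ ms/) {
    print substr($0, RSTART + 10, RLENGTH - 13), name
  }
' | sort -rn | head

On this run it would surface, for example, Restore P2L checkpoints (88.180 ms) and Initialize NV cache (64.895 ms) at the top of the startup cost.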
00:25:46.854 [2024-11-15 11:29:24.163575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.854 [2024-11-15 11:29:24.163814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:46.854 [2024-11-15 11:29:24.163842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.180 ms 00:25:46.854 [2024-11-15 11:29:24.163853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.854 [2024-11-15 11:29:24.175461] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:46.854 [2024-11-15 11:29:24.178879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.854 [2024-11-15 11:29:24.178915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:46.854 [2024-11-15 11:29:24.178931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.951 ms 00:25:46.854 [2024-11-15 11:29:24.178943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.854 [2024-11-15 11:29:24.179057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.854 [2024-11-15 11:29:24.179072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:46.854 [2024-11-15 11:29:24.179085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:46.854 [2024-11-15 11:29:24.179096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.854 [2024-11-15 11:29:24.179198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.854 [2024-11-15 11:29:24.179211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:46.854 [2024-11-15 11:29:24.179223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:46.854 [2024-11-15 11:29:24.179235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.854 [2024-11-15 11:29:24.179263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.854 [2024-11-15 11:29:24.179279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:46.854 [2024-11-15 11:29:24.179291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:46.854 [2024-11-15 11:29:24.179302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.854 [2024-11-15 11:29:24.179335] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:46.854 [2024-11-15 11:29:24.179349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.854 [2024-11-15 11:29:24.179360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:46.854 [2024-11-15 11:29:24.179371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:46.854 [2024-11-15 11:29:24.179393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.855 [2024-11-15 11:29:24.217343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.855 [2024-11-15 11:29:24.217399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:46.855 [2024-11-15 11:29:24.217416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.980 ms 00:25:46.855 [2024-11-15 11:29:24.217426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.855 [2024-11-15 11:29:24.217516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.855 [2024-11-15 
11:29:24.217529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
[2024-11-15 11:29:24.217540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
[2024-11-15 11:29:24.217551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-15 11:29:24.218678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 392.575 ms, result 0
00:25:48.234
[2024-11-15T11:30:07.841Z] Copying: 1024/1024 [MB] (average 23 MBps)
[2024-11-15 11:30:07.712037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.440 [2024-11-15 11:30:07.712237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-11-15 11:30:07.712330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
[2024-11-15 11:30:07.712369] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.440 [2024-11-15 11:30:07.713344] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:30.440 [2024-11-15 11:30:07.720026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.440 [2024-11-15 11:30:07.720161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:30.441 [2024-11-15 11:30:07.720320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.547 ms 00:26:30.441 [2024-11-15 11:30:07.720359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.441 [2024-11-15 11:30:07.731343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.441 [2024-11-15 11:30:07.731541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:30.441 [2024-11-15 11:30:07.731649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.189 ms 00:26:30.441 [2024-11-15 11:30:07.731687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.441 [2024-11-15 11:30:07.753291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.441 [2024-11-15 11:30:07.753484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:30.441 [2024-11-15 11:30:07.753586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.589 ms 00:26:30.441 [2024-11-15 11:30:07.753628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.441 [2024-11-15 11:30:07.758796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.441 [2024-11-15 11:30:07.758936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:30.441 [2024-11-15 11:30:07.759012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.007 ms 00:26:30.441 [2024-11-15 11:30:07.759048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.441 [2024-11-15 11:30:07.796341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.441 [2024-11-15 11:30:07.796480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:30.441 [2024-11-15 11:30:07.796573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.237 ms 00:26:30.441 [2024-11-15 11:30:07.796613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.441 [2024-11-15 11:30:07.820180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.441 [2024-11-15 11:30:07.820426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:30.441 [2024-11-15 11:30:07.820542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.504 ms 00:26:30.441 [2024-11-15 11:30:07.820605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.701 [2024-11-15 11:30:07.885650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.701 [2024-11-15 11:30:07.885846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:30.701 [2024-11-15 11:30:07.885930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.023 ms 00:26:30.701 [2024-11-15 11:30:07.885967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.701 [2024-11-15 11:30:07.924227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.701 [2024-11-15 11:30:07.924382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist band info metadata
00:26:30.701 [2024-11-15 11:30:07.924466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.272 ms
00:26:30.701 [2024-11-15 11:30:07.924482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:30.701 [2024-11-15 11:30:07.961840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.701 [2024-11-15 11:30:07.961881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:26:30.701 [2024-11-15 11:30:07.961896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.282 ms
00:26:30.701 [2024-11-15 11:30:07.961907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:30.701 [2024-11-15 11:30:07.998974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.701 [2024-11-15 11:30:07.999122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:26:30.701 [2024-11-15 11:30:07.999143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.085 ms
00:26:30.701 [2024-11-15 11:30:07.999154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:30.701 [2024-11-15 11:30:08.035501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.701 [2024-11-15 11:30:08.035541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:26:30.701 [2024-11-15 11:30:08.035565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.299 ms
00:26:30.701 [2024-11-15 11:30:08.035577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:30.701 [2024-11-15 11:30:08.035641] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:30.701 [2024-11-15 11:30:08.035662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 65536 / 261120 wr_cnt: 1 state: open
00:26:30.701 [2024-11-15 11:30:08.035676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
[... Bands 3-99: 0 / 261120 wr_cnt: 0 state: free ...]
00:26:30.702 [2024-11-15 11:30:08.036741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:26:30.702 [2024-11-15 11:30:08.036760] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:26:30.702 [2024-11-15 11:30:08.036770] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a370d1e8-6e29-4106-877b-439f66871000
00:26:30.702 [2024-11-15 11:30:08.036781] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 65536
00:26:30.702 [2024-11-15 11:30:08.036796] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 66496
00:26:30.702 [2024-11-15 11:30:08.036817] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 65536
00:26:30.702 [2024-11-15 11:30:08.036829] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0146
00:26:30.702 [2024-11-15 11:30:08.036839] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:26:30.702 [2024-11-15 11:30:08.036849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:26:30.702 [2024-11-15 11:30:08.036859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:26:30.702 [2024-11-15 11:30:08.036868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:26:30.702 [2024-11-15 11:30:08.036878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:26:30.702 [2024-11-15 11:30:08.036888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.702 [2024-11-15 11:30:08.036898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:26:30.702 [2024-11-15 11:30:08.036908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms
00:26:30.702 [2024-11-15 11:30:08.036918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status:
0 00:26:30.702 [2024-11-15 11:30:08.056813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.702 [2024-11-15 11:30:08.056850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:30.702 [2024-11-15 11:30:08.056863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.866 ms 00:26:30.702 [2024-11-15 11:30:08.056873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.702 [2024-11-15 11:30:08.057418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.702 [2024-11-15 11:30:08.057433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:30.702 [2024-11-15 11:30:08.057446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:26:30.702 [2024-11-15 11:30:08.057462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.962 [2024-11-15 11:30:08.110779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.962 [2024-11-15 11:30:08.110820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:30.962 [2024-11-15 11:30:08.110833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.962 [2024-11-15 11:30:08.110844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.962 [2024-11-15 11:30:08.110910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.962 [2024-11-15 11:30:08.110922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:30.962 [2024-11-15 11:30:08.110933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.962 [2024-11-15 11:30:08.110947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.962 [2024-11-15 11:30:08.111011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.962 [2024-11-15 11:30:08.111024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:30.962 [2024-11-15 11:30:08.111035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.962 [2024-11-15 11:30:08.111045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.962 [2024-11-15 11:30:08.111062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.962 [2024-11-15 11:30:08.111072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:30.962 [2024-11-15 11:30:08.111083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.962 [2024-11-15 11:30:08.111093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.962 [2024-11-15 11:30:08.237152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.962 [2024-11-15 11:30:08.237224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:30.962 [2024-11-15 11:30:08.237240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.962 [2024-11-15 11:30:08.237251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.962 [2024-11-15 11:30:08.339998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.962 [2024-11-15 11:30:08.340061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:30.962 [2024-11-15 11:30:08.340077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.962 [2024-11-15 
11:30:08.340088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.962 [2024-11-15 11:30:08.340191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.962 [2024-11-15 11:30:08.340203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:30.962 [2024-11-15 11:30:08.340214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.962 [2024-11-15 11:30:08.340224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.962 [2024-11-15 11:30:08.340274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.962 [2024-11-15 11:30:08.340286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:30.962 [2024-11-15 11:30:08.340297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.962 [2024-11-15 11:30:08.340307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.962 [2024-11-15 11:30:08.340589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.962 [2024-11-15 11:30:08.340606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:30.962 [2024-11-15 11:30:08.340617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.962 [2024-11-15 11:30:08.340627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.962 [2024-11-15 11:30:08.340682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.962 [2024-11-15 11:30:08.340695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:30.962 [2024-11-15 11:30:08.340705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.962 [2024-11-15 11:30:08.340716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.962 [2024-11-15 11:30:08.340756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.962 [2024-11-15 11:30:08.340772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:30.962 [2024-11-15 11:30:08.340783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.962 [2024-11-15 11:30:08.340793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.962 [2024-11-15 11:30:08.340835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.962 [2024-11-15 11:30:08.340847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:30.962 [2024-11-15 11:30:08.340857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.962 [2024-11-15 11:30:08.340867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.962 [2024-11-15 11:30:08.341019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 631.647 ms, result 0 00:26:32.341 00:26:32.341 00:26:32.341 11:30:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:34.248 11:30:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:34.248 [2024-11-15 11:30:11.560391] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
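The two test-script commands echoed just above are the core of the dirty-shutdown check: md5sum digests the reference file (testfile2), and spdk_dd reads the same LBA range back out of the restarted ftl0 bdev into testfile, presumably so the two digests can be compared. A minimal sketch of that read-back-and-verify pattern follows; the paths and the write step are illustrative assumptions, and only the --ib/--of/--count/--json flags are taken from this log:

  # write a known file into the FTL bdev (assumed --if/--ob direction), crash, restart, then:
  spdk_dd --if=/tmp/reference --ob=ftl0 --count=262144 --json=ftl.json
  # ... kill the target without a clean shutdown, restart it from ftl.json ...
  spdk_dd --ib=ftl0 --of=/tmp/readback --count=262144 --json=ftl.json
  md5sum /tmp/reference /tmp/readback   # the two digests must match for the test to pass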
00:26:34.248 [2024-11-15 11:30:11.560721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79699 ] 00:26:34.506 [2024-11-15 11:30:11.741376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.506 [2024-11-15 11:30:11.858340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.074 [2024-11-15 11:30:12.221961] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:35.074 [2024-11-15 11:30:12.222031] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:35.074 [2024-11-15 11:30:12.383800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.074 [2024-11-15 11:30:12.384062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:35.074 [2024-11-15 11:30:12.384096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:35.074 [2024-11-15 11:30:12.384107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.074 [2024-11-15 11:30:12.384168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.074 [2024-11-15 11:30:12.384181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:35.074 [2024-11-15 11:30:12.384197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:35.074 [2024-11-15 11:30:12.384206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.074 [2024-11-15 11:30:12.384230] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:35.074 [2024-11-15 11:30:12.385123] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:35.074 [2024-11-15 11:30:12.385149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.074 [2024-11-15 11:30:12.385160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:35.074 [2024-11-15 11:30:12.385172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:26:35.074 [2024-11-15 11:30:12.385182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.074 [2024-11-15 11:30:12.386647] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:35.074 [2024-11-15 11:30:12.405873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.074 [2024-11-15 11:30:12.405912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:35.074 [2024-11-15 11:30:12.405927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.258 ms 00:26:35.074 [2024-11-15 11:30:12.405938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.074 [2024-11-15 11:30:12.406003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.074 [2024-11-15 11:30:12.406016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:35.074 [2024-11-15 11:30:12.406027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:35.074 [2024-11-15 11:30:12.406036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.074 [2024-11-15 11:30:12.412885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:35.074 [2024-11-15 11:30:12.412914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:35.074 [2024-11-15 11:30:12.412927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.789 ms 00:26:35.074 [2024-11-15 11:30:12.412942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.074 [2024-11-15 11:30:12.413019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.074 [2024-11-15 11:30:12.413032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:35.074 [2024-11-15 11:30:12.413043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:35.074 [2024-11-15 11:30:12.413053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.074 [2024-11-15 11:30:12.413093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.074 [2024-11-15 11:30:12.413105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:35.074 [2024-11-15 11:30:12.413115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:35.074 [2024-11-15 11:30:12.413125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.075 [2024-11-15 11:30:12.413155] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:35.075 [2024-11-15 11:30:12.418094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.075 [2024-11-15 11:30:12.418126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:35.075 [2024-11-15 11:30:12.418138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.957 ms 00:26:35.075 [2024-11-15 11:30:12.418152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.075 [2024-11-15 11:30:12.418189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.075 [2024-11-15 11:30:12.418201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:35.075 [2024-11-15 11:30:12.418213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:35.075 [2024-11-15 11:30:12.418222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.075 [2024-11-15 11:30:12.418290] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:35.075 [2024-11-15 11:30:12.418313] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:35.075 [2024-11-15 11:30:12.418349] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:35.075 [2024-11-15 11:30:12.418370] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:35.075 [2024-11-15 11:30:12.418459] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:35.075 [2024-11-15 11:30:12.418472] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:35.075 [2024-11-15 11:30:12.418485] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:35.075 [2024-11-15 11:30:12.418500] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:35.075 [2024-11-15 11:30:12.418512] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:35.075 [2024-11-15 11:30:12.418523] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:35.075 [2024-11-15 11:30:12.418533] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:35.075 [2024-11-15 11:30:12.418543] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:35.075 [2024-11-15 11:30:12.418556] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:35.075 [2024-11-15 11:30:12.418567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.075 [2024-11-15 11:30:12.418595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:35.075 [2024-11-15 11:30:12.418606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:26:35.075 [2024-11-15 11:30:12.418617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.075 [2024-11-15 11:30:12.418694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.075 [2024-11-15 11:30:12.418705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:35.075 [2024-11-15 11:30:12.418716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:35.075 [2024-11-15 11:30:12.418726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.075 [2024-11-15 11:30:12.418825] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:35.075 [2024-11-15 11:30:12.418839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:35.075 [2024-11-15 11:30:12.418850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:35.075 [2024-11-15 11:30:12.418861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:35.075 [2024-11-15 11:30:12.418871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:35.075 [2024-11-15 11:30:12.418880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:35.075 [2024-11-15 11:30:12.418890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:35.075 [2024-11-15 11:30:12.418899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:35.075 [2024-11-15 11:30:12.418909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:35.075 [2024-11-15 11:30:12.418918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:35.075 [2024-11-15 11:30:12.418927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:35.075 [2024-11-15 11:30:12.418941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:35.075 [2024-11-15 11:30:12.418950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:35.075 [2024-11-15 11:30:12.418960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:35.075 [2024-11-15 11:30:12.418970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:35.075 [2024-11-15 11:30:12.418988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:35.075 [2024-11-15 11:30:12.418997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:35.075 [2024-11-15 11:30:12.419006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:35.075 [2024-11-15 11:30:12.419015] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:35.075 [2024-11-15 11:30:12.419025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:35.075 [2024-11-15 11:30:12.419034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:35.075 [2024-11-15 11:30:12.419043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:35.075 [2024-11-15 11:30:12.419052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:35.075 [2024-11-15 11:30:12.419062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:35.075 [2024-11-15 11:30:12.419071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:35.075 [2024-11-15 11:30:12.419080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:35.075 [2024-11-15 11:30:12.419089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:35.075 [2024-11-15 11:30:12.419098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:35.075 [2024-11-15 11:30:12.419107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:35.075 [2024-11-15 11:30:12.419117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:35.075 [2024-11-15 11:30:12.419126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:35.075 [2024-11-15 11:30:12.419135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:35.075 [2024-11-15 11:30:12.419143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:35.075 [2024-11-15 11:30:12.419152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:35.075 [2024-11-15 11:30:12.419161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:35.075 [2024-11-15 11:30:12.419170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:35.075 [2024-11-15 11:30:12.419179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:35.075 [2024-11-15 11:30:12.419189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:35.075 [2024-11-15 11:30:12.419198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:35.075 [2024-11-15 11:30:12.419206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:35.075 [2024-11-15 11:30:12.419216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:35.075 [2024-11-15 11:30:12.419225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:35.075 [2024-11-15 11:30:12.419233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:35.075 [2024-11-15 11:30:12.419244] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:35.075 [2024-11-15 11:30:12.419254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:35.075 [2024-11-15 11:30:12.419264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:35.075 [2024-11-15 11:30:12.419274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:35.075 [2024-11-15 11:30:12.419284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:35.075 [2024-11-15 11:30:12.419293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:35.075 [2024-11-15 11:30:12.419302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:35.075 
[2024-11-15 11:30:12.419311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:35.075 [2024-11-15 11:30:12.419320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:35.075 [2024-11-15 11:30:12.419329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:35.075 [2024-11-15 11:30:12.419340] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:35.075 [2024-11-15 11:30:12.419352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:35.075 [2024-11-15 11:30:12.419363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:35.075 [2024-11-15 11:30:12.419374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:35.075 [2024-11-15 11:30:12.419384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:35.075 [2024-11-15 11:30:12.419394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:35.075 [2024-11-15 11:30:12.419404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:35.075 [2024-11-15 11:30:12.419414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:35.075 [2024-11-15 11:30:12.419425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:35.075 [2024-11-15 11:30:12.419435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:35.075 [2024-11-15 11:30:12.419445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:35.075 [2024-11-15 11:30:12.419455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:35.075 [2024-11-15 11:30:12.419465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:35.075 [2024-11-15 11:30:12.419476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:35.075 [2024-11-15 11:30:12.419486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:35.075 [2024-11-15 11:30:12.419496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:35.076 [2024-11-15 11:30:12.419506] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:35.076 [2024-11-15 11:30:12.419520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:35.076 [2024-11-15 11:30:12.419531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:35.076 [2024-11-15 11:30:12.419541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:35.076 [2024-11-15 11:30:12.419552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:35.076 [2024-11-15 11:30:12.419573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:35.076 [2024-11-15 11:30:12.419587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.076 [2024-11-15 11:30:12.419598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:35.076 [2024-11-15 11:30:12.419608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:26:35.076 [2024-11-15 11:30:12.419617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.076 [2024-11-15 11:30:12.458487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.076 [2024-11-15 11:30:12.458528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:35.076 [2024-11-15 11:30:12.458543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.886 ms 00:26:35.076 [2024-11-15 11:30:12.458563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.076 [2024-11-15 11:30:12.458651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.076 [2024-11-15 11:30:12.458662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:35.076 [2024-11-15 11:30:12.458673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:35.076 [2024-11-15 11:30:12.458684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.334 [2024-11-15 11:30:12.517123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.334 [2024-11-15 11:30:12.517307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:35.334 [2024-11-15 11:30:12.517332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.471 ms 00:26:35.334 [2024-11-15 11:30:12.517346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.334 [2024-11-15 11:30:12.517397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.334 [2024-11-15 11:30:12.517411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:35.334 [2024-11-15 11:30:12.517431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:35.334 [2024-11-15 11:30:12.517444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.334 [2024-11-15 11:30:12.517965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.334 [2024-11-15 11:30:12.517983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:35.334 [2024-11-15 11:30:12.517998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:26:35.334 [2024-11-15 11:30:12.518011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.334 [2024-11-15 11:30:12.518145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.334 [2024-11-15 11:30:12.518162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:35.334 [2024-11-15 11:30:12.518176] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:26:35.334 [2024-11-15 11:30:12.518207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.334 [2024-11-15 11:30:12.537855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.334 [2024-11-15 11:30:12.538001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:35.334 [2024-11-15 11:30:12.538082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.656 ms 00:26:35.334 [2024-11-15 11:30:12.538118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.334 [2024-11-15 11:30:12.557172] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:35.334 [2024-11-15 11:30:12.557329] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:35.334 [2024-11-15 11:30:12.557483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.334 [2024-11-15 11:30:12.557516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:35.334 [2024-11-15 11:30:12.557547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.247 ms 00:26:35.334 [2024-11-15 11:30:12.557604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.334 [2024-11-15 11:30:12.587769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.334 [2024-11-15 11:30:12.587926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:35.334 [2024-11-15 11:30:12.588009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.151 ms 00:26:35.334 [2024-11-15 11:30:12.588047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.334 [2024-11-15 11:30:12.607345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.334 [2024-11-15 11:30:12.607493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:35.334 [2024-11-15 11:30:12.607575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.220 ms 00:26:35.334 [2024-11-15 11:30:12.607613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.334 [2024-11-15 11:30:12.625277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.334 [2024-11-15 11:30:12.625404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:35.334 [2024-11-15 11:30:12.625496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.588 ms 00:26:35.334 [2024-11-15 11:30:12.625530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.334 [2024-11-15 11:30:12.626510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.334 [2024-11-15 11:30:12.626653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:35.334 [2024-11-15 11:30:12.626731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:26:35.334 [2024-11-15 11:30:12.626774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.334 [2024-11-15 11:30:12.724413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.334 [2024-11-15 11:30:12.724619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:35.334 [2024-11-15 11:30:12.724786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 97.751 ms 00:26:35.334 [2024-11-15 11:30:12.724826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.594 [2024-11-15 11:30:12.735566] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:35.594 [2024-11-15 11:30:12.738755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.594 [2024-11-15 11:30:12.738879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:35.594 [2024-11-15 11:30:12.738952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.875 ms 00:26:35.594 [2024-11-15 11:30:12.738988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.594 [2024-11-15 11:30:12.739117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.594 [2024-11-15 11:30:12.739273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:35.594 [2024-11-15 11:30:12.739347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:35.594 [2024-11-15 11:30:12.739384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.594 [2024-11-15 11:30:12.740687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.594 [2024-11-15 11:30:12.740817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:35.594 [2024-11-15 11:30:12.740892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:26:35.594 [2024-11-15 11:30:12.740928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.594 [2024-11-15 11:30:12.741025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.594 [2024-11-15 11:30:12.741070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:35.594 [2024-11-15 11:30:12.741104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:35.594 [2024-11-15 11:30:12.741175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.594 [2024-11-15 11:30:12.741316] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:35.594 [2024-11-15 11:30:12.741398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.594 [2024-11-15 11:30:12.741437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:35.594 [2024-11-15 11:30:12.741593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:26:35.594 [2024-11-15 11:30:12.741635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.594 [2024-11-15 11:30:12.777861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.594 [2024-11-15 11:30:12.778009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:35.594 [2024-11-15 11:30:12.778093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.226 ms 00:26:35.594 [2024-11-15 11:30:12.778139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.594 [2024-11-15 11:30:12.778244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.594 [2024-11-15 11:30:12.778326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:35.594 [2024-11-15 11:30:12.778404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:35.594 [2024-11-15 11:30:12.778434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
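Every management step in the startup trace above follows the same record pattern from mngt/ftl_mngt.c (427: Action, 428: name, 430: duration, 431: status), and 459:finish_msg then reports the total for the whole process ('FTL startup', 395.989 ms here). A quick way to tabulate the per-step durations from a saved console log; this is a sketch that assumes the output has been captured to build.log with one record per line, with the name and duration fields taken from the record format visible here:

  awk -F'name: ' '/428:trace_step/ {n=$2}
      /430:trace_step/ && n {match($0, /duration: [0-9.]+ ms/);
          printf "%-35s %s\n", n, substr($0, RSTART+10, RLENGTH-10); n=""}' build.log

This prints lines such as "Restore P2L checkpoints  97.751 ms", which makes the slow steps in a run (here, Restore P2L checkpoints and Initialize NV cache at 58.471 ms) easy to spot.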
00:26:35.594 [2024-11-15 11:30:12.779618] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.989 ms, result 0
00:26:36.971 [2024-11-15T11:30:15.332Z] Copying: 1796/1048576 [kB] (1796 kBps) [2024-11-15T11:30:16.268Z] Copying: 5936/1048576 [kB] (4140 kBps) [...] [2024-11-15T11:30:45.627Z] Copying: 1010/1024 [MB] (31 MBps) [2024-11-15T11:30:46.563Z] Copying: 1024/1024 [MB] (average 31 MBps)
[2024-11-15 11:30:46.462600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.162 [2024-11-15 11:30:46.462760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:27:09.162 [2024-11-15 11:30:46.462806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:27:09.162 [2024-11-15 11:30:46.462836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.162 [2024-11-15 11:30:46.462901] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:09.162 [2024-11-15 11:30:46.474066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.162 [2024-11-15 11:30:46.474152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:27:09.162 [2024-11-15 11:30:46.474193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.131 ms
00:27:09.162 [2024-11-15 11:30:46.474215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.162 [2024-11-15 11:30:46.474726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.162 [2024-11-15 11:30:46.474756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:27:09.162 [2024-11-15 11:30:46.474788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 0.438 ms 00:27:09.162 [2024-11-15 11:30:46.474810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.162 [2024-11-15 11:30:46.491902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.162 [2024-11-15 11:30:46.491992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:09.162 [2024-11-15 11:30:46.492011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.079 ms 00:27:09.162 [2024-11-15 11:30:46.492023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.162 [2024-11-15 11:30:46.497204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.162 [2024-11-15 11:30:46.497287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:09.162 [2024-11-15 11:30:46.497315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.142 ms 00:27:09.162 [2024-11-15 11:30:46.497327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.162 [2024-11-15 11:30:46.540020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.162 [2024-11-15 11:30:46.540117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:09.162 [2024-11-15 11:30:46.540136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.662 ms 00:27:09.162 [2024-11-15 11:30:46.540148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-15 11:30:46.563931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-15 11:30:46.564016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:09.422 [2024-11-15 11:30:46.564037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.722 ms 00:27:09.422 [2024-11-15 11:30:46.564049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-15 11:30:46.566675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-15 11:30:46.566847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:09.422 [2024-11-15 11:30:46.566873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:27:09.422 [2024-11-15 11:30:46.566885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-15 11:30:46.606969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-15 11:30:46.607042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:09.422 [2024-11-15 11:30:46.607061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.105 ms 00:27:09.422 [2024-11-15 11:30:46.607073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-15 11:30:46.643635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-15 11:30:46.643817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:09.422 [2024-11-15 11:30:46.643863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.568 ms 00:27:09.422 [2024-11-15 11:30:46.643878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-15 11:30:46.678954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-15 11:30:46.679110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:09.422 [2024-11-15 
11:30:46.679134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.085 ms
00:27:09.422 [2024-11-15 11:30:46.679145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.422 [2024-11-15 11:30:46.714473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.422 [2024-11-15 11:30:46.714643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:27:09.422 [2024-11-15 11:30:46.714667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.260 ms
00:27:09.422 [2024-11-15 11:30:46.714680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.422 [2024-11-15 11:30:46.714722] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:09.422 [2024-11-15 11:30:46.714744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:27:09.422 [2024-11-15 11:30:46.714763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:27:09.422 [2024-11-15 11:30:46.714778 .. 11:30:46.715930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 .. Band 100: 0 / 261120 wr_cnt: 0 state: free
00:27:09.424 [2024-11-15 11:30:46.715950] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:09.424 [2024-11-15 11:30:46.715961] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a370d1e8-6e29-4106-877b-439f66871000
00:27:09.424 [2024-11-15 11:30:46.715972] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:27:09.424 [2024-11-15 11:30:46.715983] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 199104
00:27:09.424 [2024-11-15 11:30:46.715994] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 197120
00:27:09.424 [2024-11-15 11:30:46.716011] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0101
00:27:09.424 [2024-11-15 11:30:46.716021] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:09.424 [2024-11-15 11:30:46.716032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:09.424 [2024-11-15 11:30:46.716043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:09.424 [2024-11-15 11:30:46.716063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:09.424 [2024-11-15 11:30:46.716073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:09.424 [2024-11-15 11:30:46.716085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.424 [2024-11-15 11:30:46.716096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:09.424 [2024-11-15 11:30:46.716107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms
00:27:09.424 [2024-11-15 11:30:46.716117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.424 [2024-11-15 11:30:46.737221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.424 [2024-11-15 11:30:46.737362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:27:09.424 [2024-11-15 11:30:46.737476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.095 ms
00:27:09.424 [2024-11-15 11:30:46.737515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.424 [2024-11-15 11:30:46.738214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.424 [2024-11-15 11:30:46.738319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:27:09.424 [2024-11-15 11:30:46.738437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms
00:27:09.424 [2024-11-15 11:30:46.738473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.424 [2024-11-15
11:30:46.792943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.424 [2024-11-15 11:30:46.793201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:09.424 [2024-11-15 11:30:46.793287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.424 [2024-11-15 11:30:46.793325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.424 [2024-11-15 11:30:46.793451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.424 [2024-11-15 11:30:46.793485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:09.424 [2024-11-15 11:30:46.793518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.424 [2024-11-15 11:30:46.793615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.424 [2024-11-15 11:30:46.793797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.424 [2024-11-15 11:30:46.793845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:09.424 [2024-11-15 11:30:46.793988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.424 [2024-11-15 11:30:46.794102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.424 [2024-11-15 11:30:46.794162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.424 [2024-11-15 11:30:46.794214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:09.424 [2024-11-15 11:30:46.794252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.424 [2024-11-15 11:30:46.794288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-15 11:30:46.928703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.683 [2024-11-15 11:30:46.928946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:09.683 [2024-11-15 11:30:46.929102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.683 [2024-11-15 11:30:46.929142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-15 11:30:47.038677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.683 [2024-11-15 11:30:47.039017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:09.683 [2024-11-15 11:30:47.039110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.683 [2024-11-15 11:30:47.039148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-15 11:30:47.039307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.683 [2024-11-15 11:30:47.039358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:09.683 [2024-11-15 11:30:47.039431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.683 [2024-11-15 11:30:47.039467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-15 11:30:47.039589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.683 [2024-11-15 11:30:47.039629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:09.683 [2024-11-15 11:30:47.039663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.683 [2024-11-15 11:30:47.039749] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-15 11:30:47.039931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.683 [2024-11-15 11:30:47.039970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:09.683 [2024-11-15 11:30:47.040096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.683 [2024-11-15 11:30:47.040197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-15 11:30:47.040294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.683 [2024-11-15 11:30:47.040330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:09.683 [2024-11-15 11:30:47.040414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.683 [2024-11-15 11:30:47.040449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-15 11:30:47.040531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.683 [2024-11-15 11:30:47.040583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:09.683 [2024-11-15 11:30:47.040660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.683 [2024-11-15 11:30:47.040705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-15 11:30:47.040794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.683 [2024-11-15 11:30:47.040830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:09.683 [2024-11-15 11:30:47.040862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.683 [2024-11-15 11:30:47.040969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-15 11:30:47.041145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 579.507 ms, result 0 00:27:11.061 00:27:11.061 00:27:11.061 11:30:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:12.965 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:12.965 11:30:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:12.965 [2024-11-15 11:30:50.107649] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
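The two `ftl.ftl_dirty_shutdown` commands above are the heart of this test: `md5sum -c` confirms that the first half of the test data, written before the simulated dirty shutdown, reads back from ftl0 unchanged, and the `spdk_dd --skip=262144 --count=262144` invocation then pulls the second half out of the bdev so the same comparison can be made against `testfile2`. (At the 4 KiB block size implied by the 1024/1024 [MB] copy totals, 262144 blocks is exactly 1024 MiB.) Incidentally, the WAF figure in the statistics dump above is simply total writes over user writes: 199104 / 197120 ≈ 1.0101. Below is a minimal sketch of the manifest check, with our own (hypothetical) helper names; only the file paths come from the log:

```python
import hashlib

# Illustrative sketch of the check dirty_shutdown.sh performs with
# `md5sum -c`: hash the data read back from the ftl0 bdev and compare
# it with the digest recorded before the crash was simulated.
def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

def check_manifest(manifest_path: str) -> bool:
    # An md5sum manifest holds one "<digest>  <path>" pair per line.
    all_ok = True
    with open(manifest_path) as manifest:
        for line in manifest:
            expected, path = line.split(maxsplit=1)
            path = path.strip()
            ok = md5_of(path) == expected
            print(f"{path}: {'OK' if ok else 'FAILED'}")
            all_ok &= ok
    return all_ok

# e.g. check_manifest("/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5")
# should print "...: OK", exactly as the log shows.
```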
00:27:12.965 [2024-11-15 11:30:50.107800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80082 ] 00:27:12.965 [2024-11-15 11:30:50.296887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.224 [2024-11-15 11:30:50.441899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.483 [2024-11-15 11:30:50.874785] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:13.483 [2024-11-15 11:30:50.874891] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:13.743 [2024-11-15 11:30:51.043232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.743 [2024-11-15 11:30:51.043318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:13.743 [2024-11-15 11:30:51.043347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:13.743 [2024-11-15 11:30:51.043362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.743 [2024-11-15 11:30:51.043427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.743 [2024-11-15 11:30:51.043444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:13.743 [2024-11-15 11:30:51.043461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:13.743 [2024-11-15 11:30:51.043475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.743 [2024-11-15 11:30:51.043502] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:13.743 [2024-11-15 11:30:51.044519] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:13.743 [2024-11-15 11:30:51.044553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.743 [2024-11-15 11:30:51.044578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:13.743 [2024-11-15 11:30:51.044593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:27:13.743 [2024-11-15 11:30:51.044607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.743 [2024-11-15 11:30:51.046992] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:13.743 [2024-11-15 11:30:51.067049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.743 [2024-11-15 11:30:51.067092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:13.743 [2024-11-15 11:30:51.067108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.091 ms 00:27:13.743 [2024-11-15 11:30:51.067119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.743 [2024-11-15 11:30:51.067193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.743 [2024-11-15 11:30:51.067207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:13.743 [2024-11-15 11:30:51.067220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:13.743 [2024-11-15 11:30:51.067230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.743 [2024-11-15 11:30:51.079395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:13.743 [2024-11-15 11:30:51.079432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:13.743 [2024-11-15 11:30:51.079446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.107 ms 00:27:13.743 [2024-11-15 11:30:51.079462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.743 [2024-11-15 11:30:51.079552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.743 [2024-11-15 11:30:51.079580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:13.743 [2024-11-15 11:30:51.079608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:13.743 [2024-11-15 11:30:51.079619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.743 [2024-11-15 11:30:51.079692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.743 [2024-11-15 11:30:51.079706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:13.743 [2024-11-15 11:30:51.079717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:13.743 [2024-11-15 11:30:51.079743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.743 [2024-11-15 11:30:51.079776] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:13.743 [2024-11-15 11:30:51.085581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.743 [2024-11-15 11:30:51.085733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:13.743 [2024-11-15 11:30:51.085756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.827 ms 00:27:13.743 [2024-11-15 11:30:51.085774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.743 [2024-11-15 11:30:51.085813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.743 [2024-11-15 11:30:51.085825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:13.743 [2024-11-15 11:30:51.085837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:13.743 [2024-11-15 11:30:51.085848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.743 [2024-11-15 11:30:51.085887] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:13.743 [2024-11-15 11:30:51.085913] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:13.743 [2024-11-15 11:30:51.085952] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:13.743 [2024-11-15 11:30:51.085976] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:13.743 [2024-11-15 11:30:51.086071] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:13.743 [2024-11-15 11:30:51.086085] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:13.743 [2024-11-15 11:30:51.086098] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:13.743 [2024-11-15 11:30:51.086112] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:13.743 [2024-11-15 11:30:51.086124] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:13.743 [2024-11-15 11:30:51.086136] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:13.743 [2024-11-15 11:30:51.086147] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:13.744 [2024-11-15 11:30:51.086157] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:13.744 [2024-11-15 11:30:51.086172] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:13.744 [2024-11-15 11:30:51.086194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.744 [2024-11-15 11:30:51.086205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:13.744 [2024-11-15 11:30:51.086216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:27:13.744 [2024-11-15 11:30:51.086227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.744 [2024-11-15 11:30:51.086300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.744 [2024-11-15 11:30:51.086310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:13.744 [2024-11-15 11:30:51.086321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:13.744 [2024-11-15 11:30:51.086331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.744 [2024-11-15 11:30:51.086438] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:13.744 [2024-11-15 11:30:51.086453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:13.744 [2024-11-15 11:30:51.086465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:13.744 [2024-11-15 11:30:51.086476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.744 [2024-11-15 11:30:51.086487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:13.744 [2024-11-15 11:30:51.086497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:13.744 [2024-11-15 11:30:51.086507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:13.744 [2024-11-15 11:30:51.086517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:13.744 [2024-11-15 11:30:51.086529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:13.744 [2024-11-15 11:30:51.086539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:13.744 [2024-11-15 11:30:51.086550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:13.744 [2024-11-15 11:30:51.086578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:13.744 [2024-11-15 11:30:51.086589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:13.744 [2024-11-15 11:30:51.086599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:13.744 [2024-11-15 11:30:51.086609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:13.744 [2024-11-15 11:30:51.086630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.744 [2024-11-15 11:30:51.086640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:13.744 [2024-11-15 11:30:51.086650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:13.744 [2024-11-15 11:30:51.086661] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.744 [2024-11-15 11:30:51.086671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:13.744 [2024-11-15 11:30:51.086681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:13.744 [2024-11-15 11:30:51.086691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:13.744 [2024-11-15 11:30:51.086701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:13.744 [2024-11-15 11:30:51.086710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:13.744 [2024-11-15 11:30:51.086720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:13.744 [2024-11-15 11:30:51.086729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:13.744 [2024-11-15 11:30:51.086738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:13.744 [2024-11-15 11:30:51.086747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:13.744 [2024-11-15 11:30:51.086757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:13.744 [2024-11-15 11:30:51.086767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:13.744 [2024-11-15 11:30:51.086776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:13.744 [2024-11-15 11:30:51.086785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:13.744 [2024-11-15 11:30:51.086794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:13.744 [2024-11-15 11:30:51.086803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:13.744 [2024-11-15 11:30:51.086812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:13.744 [2024-11-15 11:30:51.086822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:13.744 [2024-11-15 11:30:51.086831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:13.744 [2024-11-15 11:30:51.086840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:13.744 [2024-11-15 11:30:51.086850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:13.744 [2024-11-15 11:30:51.086859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.744 [2024-11-15 11:30:51.086868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:13.744 [2024-11-15 11:30:51.086881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:13.744 [2024-11-15 11:30:51.086890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.744 [2024-11-15 11:30:51.086900] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:13.744 [2024-11-15 11:30:51.086910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:13.744 [2024-11-15 11:30:51.086921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:13.744 [2024-11-15 11:30:51.086931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.744 [2024-11-15 11:30:51.086943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:13.744 [2024-11-15 11:30:51.086953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:13.744 [2024-11-15 11:30:51.086962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:13.744 
[2024-11-15 11:30:51.086972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:13.744 [2024-11-15 11:30:51.086981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:13.744 [2024-11-15 11:30:51.086992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:13.744 [2024-11-15 11:30:51.087004] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:13.744 [2024-11-15 11:30:51.087018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:13.744 [2024-11-15 11:30:51.087030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:13.744 [2024-11-15 11:30:51.087042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:13.744 [2024-11-15 11:30:51.087053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:13.744 [2024-11-15 11:30:51.087064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:13.744 [2024-11-15 11:30:51.087074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:13.744 [2024-11-15 11:30:51.087086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:13.744 [2024-11-15 11:30:51.087096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:13.744 [2024-11-15 11:30:51.087106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:13.744 [2024-11-15 11:30:51.087116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:13.744 [2024-11-15 11:30:51.087126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:13.744 [2024-11-15 11:30:51.087137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:13.744 [2024-11-15 11:30:51.087147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:13.744 [2024-11-15 11:30:51.087157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:13.744 [2024-11-15 11:30:51.087167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:13.744 [2024-11-15 11:30:51.087178] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:13.744 [2024-11-15 11:30:51.087194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:13.744 [2024-11-15 11:30:51.087206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:13.744 [2024-11-15 11:30:51.087217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:13.744 [2024-11-15 11:30:51.087228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:13.744 [2024-11-15 11:30:51.087239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:13.744 [2024-11-15 11:30:51.087251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.744 [2024-11-15 11:30:51.087262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:13.744 [2024-11-15 11:30:51.087274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:27:13.745 [2024-11-15 11:30:51.087284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.745 [2024-11-15 11:30:51.135809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.745 [2024-11-15 11:30:51.135853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:13.745 [2024-11-15 11:30:51.135868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.552 ms 00:27:13.745 [2024-11-15 11:30:51.135879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.745 [2024-11-15 11:30:51.135972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.745 [2024-11-15 11:30:51.135984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:13.745 [2024-11-15 11:30:51.135995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:13.745 [2024-11-15 11:30:51.136005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.004 [2024-11-15 11:30:51.199901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.004 [2024-11-15 11:30:51.199948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:14.004 [2024-11-15 11:30:51.199964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.909 ms 00:27:14.004 [2024-11-15 11:30:51.199975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.004 [2024-11-15 11:30:51.200020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.004 [2024-11-15 11:30:51.200033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:14.004 [2024-11-15 11:30:51.200050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:14.004 [2024-11-15 11:30:51.200061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.004 [2024-11-15 11:30:51.200861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.004 [2024-11-15 11:30:51.200877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:14.004 [2024-11-15 11:30:51.200889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:27:14.004 [2024-11-15 11:30:51.200900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.004 [2024-11-15 11:30:51.201037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.004 [2024-11-15 11:30:51.201052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:14.004 [2024-11-15 11:30:51.201064] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:27:14.004 [2024-11-15 11:30:51.201082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.004 [2024-11-15 11:30:51.224200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.004 [2024-11-15 11:30:51.224378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:14.004 [2024-11-15 11:30:51.224408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.132 ms 00:27:14.004 [2024-11-15 11:30:51.224420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.004 [2024-11-15 11:30:51.244926] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:14.004 [2024-11-15 11:30:51.244968] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:14.004 [2024-11-15 11:30:51.244984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.004 [2024-11-15 11:30:51.244996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:14.004 [2024-11-15 11:30:51.245009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.474 ms 00:27:14.004 [2024-11-15 11:30:51.245019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.004 [2024-11-15 11:30:51.273928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.004 [2024-11-15 11:30:51.274081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:14.004 [2024-11-15 11:30:51.274106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.909 ms 00:27:14.004 [2024-11-15 11:30:51.274122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.004 [2024-11-15 11:30:51.292602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.004 [2024-11-15 11:30:51.292641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:14.004 [2024-11-15 11:30:51.292654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.464 ms 00:27:14.004 [2024-11-15 11:30:51.292681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.004 [2024-11-15 11:30:51.311156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.004 [2024-11-15 11:30:51.311194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:14.004 [2024-11-15 11:30:51.311208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.465 ms 00:27:14.004 [2024-11-15 11:30:51.311219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.004 [2024-11-15 11:30:51.312062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.004 [2024-11-15 11:30:51.312201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:14.004 [2024-11-15 11:30:51.312223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:27:14.004 [2024-11-15 11:30:51.312241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.264 [2024-11-15 11:30:51.406571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.264 [2024-11-15 11:30:51.406826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:14.264 [2024-11-15 11:30:51.406863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.452 ms 00:27:14.264 [2024-11-15 11:30:51.406876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.264 [2024-11-15 11:30:51.418453] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:14.264 [2024-11-15 11:30:51.422094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.264 [2024-11-15 11:30:51.422127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:14.264 [2024-11-15 11:30:51.422142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.098 ms 00:27:14.264 [2024-11-15 11:30:51.422169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.264 [2024-11-15 11:30:51.422308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.264 [2024-11-15 11:30:51.422323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:14.264 [2024-11-15 11:30:51.422336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:14.264 [2024-11-15 11:30:51.422351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.264 [2024-11-15 11:30:51.423756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.264 [2024-11-15 11:30:51.423782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:14.264 [2024-11-15 11:30:51.423795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:27:14.264 [2024-11-15 11:30:51.423807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.264 [2024-11-15 11:30:51.423838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.264 [2024-11-15 11:30:51.423850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:14.264 [2024-11-15 11:30:51.423862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:14.264 [2024-11-15 11:30:51.423872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.264 [2024-11-15 11:30:51.423919] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:14.264 [2024-11-15 11:30:51.423933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.264 [2024-11-15 11:30:51.423944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:14.264 [2024-11-15 11:30:51.423955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:14.264 [2024-11-15 11:30:51.423966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.264 [2024-11-15 11:30:51.461931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.264 [2024-11-15 11:30:51.461979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:14.264 [2024-11-15 11:30:51.462005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.005 ms 00:27:14.264 [2024-11-15 11:30:51.462022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.264 [2024-11-15 11:30:51.462106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.264 [2024-11-15 11:30:51.462120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:14.264 [2024-11-15 11:30:51.462132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:27:14.264 [2024-11-15 11:30:51.462143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
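Because every management step is traced as an Action / name / duration / status group, a startup log like the one above can be turned into a timing profile with a few lines of scripting. A rough sketch, assuming the console output has been captured into a single string `log_text` (the function and variable names are ours, not part of SPDK):

```python
import re

# Pair each `name:` trace_step entry with the `duration:` that follows
# it; the entries appear in matching order, so zip() lines them up.
def step_costs(log_text: str) -> list[tuple[str, float]]:
    # Step names run up to the next elapsed-time stamp (hh:mm:ss.mmm).
    names = re.findall(r"name: (.*?)\s+\d{2}:\d{2}:\d{2}\.\d{3}", log_text)
    durations = [float(d) for d in re.findall(r"duration: ([\d.]+) ms", log_text)]
    return sorted(zip(names, durations), key=lambda kv: kv[1], reverse=True)
```

For the startup sequence above, 'Restore P2L checkpoints' (94.452 ms), 'Initialize NV cache' (63.909 ms) and 'Initialize metadata' (48.552 ms) would come out on top, together accounting for roughly half of the 'FTL startup' total reported just below.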
00:27:14.264 [2024-11-15 11:30:51.463701] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 420.603 ms, result 0 00:27:15.655  [2024-11-15T11:30:53.993Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-15T11:30:54.930Z] Copying: 53/1024 [MB] (26 MBps) [2024-11-15T11:30:55.870Z] Copying: 81/1024 [MB] (28 MBps) [2024-11-15T11:30:56.805Z] Copying: 110/1024 [MB] (28 MBps) [2024-11-15T11:30:57.742Z] Copying: 137/1024 [MB] (27 MBps) [2024-11-15T11:30:59.117Z] Copying: 165/1024 [MB] (28 MBps) [2024-11-15T11:30:59.687Z] Copying: 197/1024 [MB] (31 MBps) [2024-11-15T11:31:01.066Z] Copying: 228/1024 [MB] (31 MBps) [2024-11-15T11:31:01.708Z] Copying: 253/1024 [MB] (24 MBps) [2024-11-15T11:31:03.087Z] Copying: 278/1024 [MB] (25 MBps) [2024-11-15T11:31:04.025Z] Copying: 305/1024 [MB] (26 MBps) [2024-11-15T11:31:04.963Z] Copying: 331/1024 [MB] (26 MBps) [2024-11-15T11:31:05.900Z] Copying: 356/1024 [MB] (25 MBps) [2024-11-15T11:31:06.836Z] Copying: 382/1024 [MB] (26 MBps) [2024-11-15T11:31:07.772Z] Copying: 409/1024 [MB] (26 MBps) [2024-11-15T11:31:08.708Z] Copying: 435/1024 [MB] (26 MBps) [2024-11-15T11:31:10.105Z] Copying: 462/1024 [MB] (26 MBps) [2024-11-15T11:31:10.673Z] Copying: 491/1024 [MB] (29 MBps) [2024-11-15T11:31:12.051Z] Copying: 518/1024 [MB] (27 MBps) [2024-11-15T11:31:12.987Z] Copying: 545/1024 [MB] (26 MBps) [2024-11-15T11:31:13.919Z] Copying: 574/1024 [MB] (28 MBps) [2024-11-15T11:31:14.850Z] Copying: 608/1024 [MB] (34 MBps) [2024-11-15T11:31:15.782Z] Copying: 643/1024 [MB] (35 MBps) [2024-11-15T11:31:16.718Z] Copying: 677/1024 [MB] (33 MBps) [2024-11-15T11:31:17.654Z] Copying: 704/1024 [MB] (27 MBps) [2024-11-15T11:31:19.032Z] Copying: 731/1024 [MB] (26 MBps) [2024-11-15T11:31:19.965Z] Copying: 757/1024 [MB] (25 MBps) [2024-11-15T11:31:20.899Z] Copying: 782/1024 [MB] (24 MBps) [2024-11-15T11:31:21.834Z] Copying: 807/1024 [MB] (25 MBps) [2024-11-15T11:31:22.771Z] Copying: 834/1024 [MB] (27 MBps) [2024-11-15T11:31:23.708Z] Copying: 863/1024 [MB] (29 MBps) [2024-11-15T11:31:24.642Z] Copying: 893/1024 [MB] (29 MBps) [2024-11-15T11:31:26.015Z] Copying: 922/1024 [MB] (28 MBps) [2024-11-15T11:31:26.951Z] Copying: 952/1024 [MB] (30 MBps) [2024-11-15T11:31:27.886Z] Copying: 979/1024 [MB] (27 MBps) [2024-11-15T11:31:28.455Z] Copying: 1005/1024 [MB] (25 MBps) [2024-11-15T11:31:28.714Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-15 11:31:28.533014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.313 [2024-11-15 11:31:28.533103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:51.313 [2024-11-15 11:31:28.533135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:51.313 [2024-11-15 11:31:28.533158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.313 [2024-11-15 11:31:28.533212] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:51.313 [2024-11-15 11:31:28.540287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.313 [2024-11-15 11:31:28.540334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:51.313 [2024-11-15 11:31:28.540359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.051 ms 00:27:51.313 [2024-11-15 11:31:28.540374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.313 [2024-11-15 11:31:28.540712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:51.313 [2024-11-15 11:31:28.540736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:51.313 [2024-11-15 11:31:28.540752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:27:51.313 [2024-11-15 11:31:28.540767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.313 [2024-11-15 11:31:28.544666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.313 [2024-11-15 11:31:28.544698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:51.313 [2024-11-15 11:31:28.544714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.885 ms 00:27:51.313 [2024-11-15 11:31:28.544728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.313 [2024-11-15 11:31:28.550431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.313 [2024-11-15 11:31:28.550605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:51.313 [2024-11-15 11:31:28.550628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.679 ms 00:27:51.313 [2024-11-15 11:31:28.550640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.313 [2024-11-15 11:31:28.588412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.313 [2024-11-15 11:31:28.588453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:51.313 [2024-11-15 11:31:28.588467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.762 ms 00:27:51.313 [2024-11-15 11:31:28.588494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.313 [2024-11-15 11:31:28.609809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.313 [2024-11-15 11:31:28.609950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:51.313 [2024-11-15 11:31:28.609970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.307 ms 00:27:51.313 [2024-11-15 11:31:28.609998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.313 [2024-11-15 11:31:28.611994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.313 [2024-11-15 11:31:28.612039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:51.313 [2024-11-15 11:31:28.612053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.958 ms 00:27:51.313 [2024-11-15 11:31:28.612063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.313 [2024-11-15 11:31:28.648449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.313 [2024-11-15 11:31:28.648486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:51.313 [2024-11-15 11:31:28.648500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.428 ms 00:27:51.313 [2024-11-15 11:31:28.648510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.313 [2024-11-15 11:31:28.685077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.313 [2024-11-15 11:31:28.685128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:51.313 [2024-11-15 11:31:28.685141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.588 ms 00:27:51.313 [2024-11-15 11:31:28.685150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.573 [2024-11-15 
11:31:28.722190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.573 [2024-11-15 11:31:28.722270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:51.573 [2024-11-15 11:31:28.722283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.061 ms 00:27:51.573 [2024-11-15 11:31:28.722294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.573 [2024-11-15 11:31:28.758854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.573 [2024-11-15 11:31:28.758900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:51.573 [2024-11-15 11:31:28.758914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.537 ms 00:27:51.573 [2024-11-15 11:31:28.758924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.573 [2024-11-15 11:31:28.758963] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:51.573 [2024-11-15 11:31:28.758981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:51.573 [2024-11-15 11:31:28.759000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:51.573 [2024-11-15 11:31:28.759013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:51.573 [2024-11-15 11:31:28.759024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:51.573 [2024-11-15 11:31:28.759035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:51.573 [2024-11-15 11:31:28.759046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:51.573 [2024-11-15 11:31:28.759057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:51.573 [2024-11-15 11:31:28.759068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:51.573 [2024-11-15 11:31:28.759079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:51.573 [2024-11-15 11:31:28.759089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 
0 state: free 00:27:51.574 [2024-11-15 11:31:28.759186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
43: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759737] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.759994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.760005] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.760015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.760028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.760039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.760050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.760061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.760072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.760083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:51.574 [2024-11-15 11:31:28.760101] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:51.575 [2024-11-15 11:31:28.760116] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a370d1e8-6e29-4106-877b-439f66871000 00:27:51.575 [2024-11-15 11:31:28.760128] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:51.575 [2024-11-15 11:31:28.760138] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:51.575 [2024-11-15 11:31:28.760148] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:51.575 [2024-11-15 11:31:28.760158] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:51.575 [2024-11-15 11:31:28.760169] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:51.575 [2024-11-15 11:31:28.760179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:51.575 [2024-11-15 11:31:28.760200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:51.575 [2024-11-15 11:31:28.760209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:51.575 [2024-11-15 11:31:28.760219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:51.575 [2024-11-15 11:31:28.760228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.575 [2024-11-15 11:31:28.760241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:51.575 [2024-11-15 11:31:28.760252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:27:51.575 [2024-11-15 11:31:28.760262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.575 [2024-11-15 11:31:28.780169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.575 [2024-11-15 11:31:28.780215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:51.575 [2024-11-15 11:31:28.780234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.897 ms 00:27:51.575 [2024-11-15 11:31:28.780251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.575 [2024-11-15 11:31:28.780805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.575 [2024-11-15 11:31:28.780826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:51.575 [2024-11-15 11:31:28.780856] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:27:51.575 [2024-11-15 11:31:28.780868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.575 [2024-11-15 11:31:28.833187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.575 [2024-11-15 11:31:28.833234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:51.575 [2024-11-15 11:31:28.833250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.575 [2024-11-15 11:31:28.833261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.575 [2024-11-15 11:31:28.833324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.575 [2024-11-15 11:31:28.833336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:51.575 [2024-11-15 11:31:28.833352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.575 [2024-11-15 11:31:28.833363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.575 [2024-11-15 11:31:28.833431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.575 [2024-11-15 11:31:28.833445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:51.575 [2024-11-15 11:31:28.833456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.575 [2024-11-15 11:31:28.833467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.575 [2024-11-15 11:31:28.833484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.575 [2024-11-15 11:31:28.833495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:51.575 [2024-11-15 11:31:28.833505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.575 [2024-11-15 11:31:28.833520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.575 [2024-11-15 11:31:28.957079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.575 [2024-11-15 11:31:28.957147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:51.575 [2024-11-15 11:31:28.957163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.575 [2024-11-15 11:31:28.957174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.835 [2024-11-15 11:31:29.057404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.835 [2024-11-15 11:31:29.057468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:51.835 [2024-11-15 11:31:29.057506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.835 [2024-11-15 11:31:29.057517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.835 [2024-11-15 11:31:29.057813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.835 [2024-11-15 11:31:29.057867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:51.835 [2024-11-15 11:31:29.057894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.835 [2024-11-15 11:31:29.057905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.835 [2024-11-15 11:31:29.057963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.835 [2024-11-15 11:31:29.057975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands 00:27:51.835 [2024-11-15 11:31:29.057986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.835 [2024-11-15 11:31:29.057997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.835 [2024-11-15 11:31:29.058111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.835 [2024-11-15 11:31:29.058126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:51.835 [2024-11-15 11:31:29.058137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.835 [2024-11-15 11:31:29.058146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.835 [2024-11-15 11:31:29.058192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.835 [2024-11-15 11:31:29.058206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:51.835 [2024-11-15 11:31:29.058216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.835 [2024-11-15 11:31:29.058226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.835 [2024-11-15 11:31:29.058270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.835 [2024-11-15 11:31:29.058282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:51.835 [2024-11-15 11:31:29.058292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.835 [2024-11-15 11:31:29.058303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.835 [2024-11-15 11:31:29.058343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.835 [2024-11-15 11:31:29.058356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:51.835 [2024-11-15 11:31:29.058366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.835 [2024-11-15 11:31:29.058376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.835 [2024-11-15 11:31:29.058496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.323 ms, result 0 00:27:52.773 00:27:52.773 00:27:52.773 11:31:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:54.683 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:54.683 11:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:54.683 11:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:54.683 11:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:54.683 11:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:54.942 11:31:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:54.942 11:31:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:54.942 11:31:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:54.942 11:31:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78230 00:27:54.942 11:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 78230 ']' 00:27:54.942 11:31:32 
ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 78230 00:27:54.942 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (78230) - No such process 00:27:54.942 Process with pid 78230 is not found 00:27:54.942 11:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 78230 is not found' 00:27:54.942 11:31:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:55.201 Remove shared memory files 00:27:55.201 11:31:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:55.201 11:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:55.201 11:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:55.201 11:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:55.201 11:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:55.201 11:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:55.201 11:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:55.201 ************************************ 00:27:55.201 END TEST ftl_dirty_shutdown 00:27:55.201 ************************************ 00:27:55.201 00:27:55.202 real 3m39.173s 00:27:55.202 user 4m7.122s 00:27:55.202 sys 0m40.500s 00:27:55.202 11:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:55.202 11:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:55.461 11:31:32 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:55.461 11:31:32 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:55.461 11:31:32 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:55.461 11:31:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:55.461 ************************************ 00:27:55.461 START TEST ftl_upgrade_shutdown 00:27:55.461 ************************************ 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:55.461 * Looking for test storage... 
00:27:55.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:55.461 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:55.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.721 --rc genhtml_branch_coverage=1 00:27:55.721 --rc genhtml_function_coverage=1 00:27:55.721 --rc genhtml_legend=1 00:27:55.721 --rc geninfo_all_blocks=1 00:27:55.721 --rc geninfo_unexecuted_blocks=1 00:27:55.721 00:27:55.721 ' 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:55.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.721 --rc genhtml_branch_coverage=1 00:27:55.721 --rc genhtml_function_coverage=1 00:27:55.721 --rc genhtml_legend=1 00:27:55.721 --rc geninfo_all_blocks=1 00:27:55.721 --rc geninfo_unexecuted_blocks=1 00:27:55.721 00:27:55.721 ' 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:55.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.721 --rc genhtml_branch_coverage=1 00:27:55.721 --rc genhtml_function_coverage=1 00:27:55.721 --rc genhtml_legend=1 00:27:55.721 --rc geninfo_all_blocks=1 00:27:55.721 --rc geninfo_unexecuted_blocks=1 00:27:55.721 00:27:55.721 ' 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:55.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.721 --rc genhtml_branch_coverage=1 00:27:55.721 --rc genhtml_function_coverage=1 00:27:55.721 --rc genhtml_legend=1 00:27:55.721 --rc geninfo_all_blocks=1 00:27:55.721 --rc geninfo_unexecuted_blocks=1 00:27:55.721 00:27:55.721 ' 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:55.721 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:55.722 11:31:32 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80591 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80591 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80591 ']' 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:55.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:55.722 11:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:55.722 [2024-11-15 11:31:33.031904] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:27:55.722 [2024-11-15 11:31:33.032291] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80591 ] 00:27:55.981 [2024-11-15 11:31:33.219671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.981 [2024-11-15 11:31:33.333692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:56.918 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:57.176 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:57.176 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:57.176 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:57.176 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:27:57.176 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:57.176 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:57.176 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:27:57.177 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:57.434 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:57.434 { 00:27:57.434 "name": "basen1", 00:27:57.434 "aliases": [ 00:27:57.434 "db6a0688-38b1-41ef-8844-72e62f940ee5" 00:27:57.434 ], 00:27:57.434 "product_name": "NVMe disk", 00:27:57.434 "block_size": 4096, 00:27:57.434 "num_blocks": 1310720, 00:27:57.434 "uuid": "db6a0688-38b1-41ef-8844-72e62f940ee5", 00:27:57.434 "numa_id": -1, 00:27:57.434 "assigned_rate_limits": { 00:27:57.434 "rw_ios_per_sec": 0, 00:27:57.434 "rw_mbytes_per_sec": 0, 00:27:57.434 "r_mbytes_per_sec": 0, 00:27:57.434 "w_mbytes_per_sec": 0 00:27:57.434 }, 00:27:57.434 "claimed": true, 00:27:57.434 "claim_type": "read_many_write_one", 00:27:57.434 "zoned": false, 00:27:57.434 "supported_io_types": { 00:27:57.434 "read": true, 00:27:57.434 "write": true, 00:27:57.434 "unmap": true, 00:27:57.434 "flush": true, 00:27:57.434 "reset": true, 00:27:57.434 "nvme_admin": true, 00:27:57.434 "nvme_io": true, 00:27:57.434 "nvme_io_md": false, 00:27:57.434 "write_zeroes": true, 00:27:57.434 "zcopy": false, 00:27:57.434 "get_zone_info": false, 00:27:57.434 "zone_management": false, 00:27:57.434 "zone_append": false, 00:27:57.434 "compare": true, 00:27:57.434 "compare_and_write": false, 00:27:57.434 "abort": true, 00:27:57.434 "seek_hole": false, 00:27:57.434 "seek_data": false, 00:27:57.434 "copy": true, 00:27:57.434 "nvme_iov_md": false 00:27:57.434 }, 00:27:57.434 "driver_specific": { 00:27:57.434 "nvme": [ 00:27:57.434 { 00:27:57.434 "pci_address": "0000:00:11.0", 00:27:57.434 "trid": { 00:27:57.434 "trtype": "PCIe", 00:27:57.434 "traddr": "0000:00:11.0" 00:27:57.434 }, 00:27:57.434 "ctrlr_data": { 00:27:57.434 "cntlid": 0, 00:27:57.434 "vendor_id": "0x1b36", 00:27:57.434 "model_number": "QEMU NVMe Ctrl", 00:27:57.434 "serial_number": "12341", 00:27:57.434 "firmware_revision": "8.0.0", 00:27:57.434 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:57.434 "oacs": { 00:27:57.434 "security": 0, 00:27:57.434 "format": 1, 00:27:57.434 "firmware": 0, 00:27:57.434 "ns_manage": 1 00:27:57.434 }, 00:27:57.434 "multi_ctrlr": false, 00:27:57.434 "ana_reporting": false 00:27:57.434 }, 00:27:57.434 "vs": { 00:27:57.434 "nvme_version": "1.4" 00:27:57.434 }, 00:27:57.434 "ns_data": { 00:27:57.434 "id": 1, 00:27:57.434 "can_share": false 00:27:57.434 } 00:27:57.434 } 00:27:57.434 ], 00:27:57.434 "mp_policy": "active_passive" 00:27:57.434 } 00:27:57.434 } 00:27:57.434 ]' 00:27:57.434 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:57.434 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:57.434 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:57.434 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:27:57.434 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:27:57.434 11:31:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:27:57.435 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:57.435 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:57.435 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:57.435 11:31:34 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:57.435 11:31:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:57.692 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=2c09184a-214f-4e53-b3d7-49ade7af5bc4 00:27:57.692 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:57.692 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2c09184a-214f-4e53-b3d7-49ade7af5bc4 00:27:57.948 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:58.206 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=b3588499-660c-401b-8529-c165c6f338ca 00:27:58.206 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u b3588499-660c-401b-8529-c165c6f338ca 00:27:58.464 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=f5f6f2d0-2233-4df7-bacb-8e439ccd1c62 00:27:58.464 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z f5f6f2d0-2233-4df7-bacb-8e439ccd1c62 ]] 00:27:58.464 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 f5f6f2d0-2233-4df7-bacb-8e439ccd1c62 5120 00:27:58.464 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:58.464 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:58.464 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=f5f6f2d0-2233-4df7-bacb-8e439ccd1c62 00:27:58.464 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:58.464 11:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size f5f6f2d0-2233-4df7-bacb-8e439ccd1c62 00:27:58.464 11:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=f5f6f2d0-2233-4df7-bacb-8e439ccd1c62 00:27:58.464 11:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:58.464 11:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:58.464 11:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:27:58.464 11:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5f6f2d0-2233-4df7-bacb-8e439ccd1c62 00:27:58.722 11:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:58.722 { 00:27:58.722 "name": "f5f6f2d0-2233-4df7-bacb-8e439ccd1c62", 00:27:58.722 "aliases": [ 00:27:58.722 "lvs/basen1p0" 00:27:58.722 ], 00:27:58.722 "product_name": "Logical Volume", 00:27:58.722 "block_size": 4096, 00:27:58.722 "num_blocks": 5242880, 00:27:58.722 "uuid": "f5f6f2d0-2233-4df7-bacb-8e439ccd1c62", 00:27:58.722 "assigned_rate_limits": { 00:27:58.722 "rw_ios_per_sec": 0, 00:27:58.722 "rw_mbytes_per_sec": 0, 00:27:58.722 "r_mbytes_per_sec": 0, 00:27:58.722 "w_mbytes_per_sec": 0 00:27:58.722 }, 00:27:58.722 "claimed": false, 00:27:58.722 "zoned": false, 00:27:58.722 "supported_io_types": { 00:27:58.722 "read": true, 00:27:58.722 "write": true, 00:27:58.722 "unmap": true, 00:27:58.722 "flush": false, 00:27:58.722 "reset": true, 00:27:58.722 "nvme_admin": false, 00:27:58.722 "nvme_io": false, 00:27:58.722 "nvme_io_md": false, 00:27:58.722 "write_zeroes": 
true, 00:27:58.722 "zcopy": false, 00:27:58.722 "get_zone_info": false, 00:27:58.722 "zone_management": false, 00:27:58.722 "zone_append": false, 00:27:58.722 "compare": false, 00:27:58.722 "compare_and_write": false, 00:27:58.722 "abort": false, 00:27:58.722 "seek_hole": true, 00:27:58.722 "seek_data": true, 00:27:58.722 "copy": false, 00:27:58.722 "nvme_iov_md": false 00:27:58.722 }, 00:27:58.722 "driver_specific": { 00:27:58.722 "lvol": { 00:27:58.722 "lvol_store_uuid": "b3588499-660c-401b-8529-c165c6f338ca", 00:27:58.722 "base_bdev": "basen1", 00:27:58.722 "thin_provision": true, 00:27:58.722 "num_allocated_clusters": 0, 00:27:58.722 "snapshot": false, 00:27:58.722 "clone": false, 00:27:58.722 "esnap_clone": false 00:27:58.722 } 00:27:58.722 } 00:27:58.722 } 00:27:58.722 ]' 00:27:58.722 11:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:58.722 11:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:58.722 11:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:58.722 11:31:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:27:58.722 11:31:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:27:58.722 11:31:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:27:58.722 11:31:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:58.722 11:31:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:58.722 11:31:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:58.981 11:31:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:58.981 11:31:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:58.981 11:31:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:59.240 11:31:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:59.240 11:31:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:59.240 11:31:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f5f6f2d0-2233-4df7-bacb-8e439ccd1c62 -c cachen1p0 --l2p_dram_limit 2 00:27:59.499 [2024-11-15 11:31:36.708494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.499 [2024-11-15 11:31:36.708573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:59.499 [2024-11-15 11:31:36.708596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:59.499 [2024-11-15 11:31:36.708609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.499 [2024-11-15 11:31:36.708674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.499 [2024-11-15 11:31:36.708690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:59.499 [2024-11-15 11:31:36.708706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:59.499 [2024-11-15 11:31:36.708718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.499 [2024-11-15 11:31:36.708746] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:59.499 [2024-11-15 
11:31:36.709810] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:59.499 [2024-11-15 11:31:36.709858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.499 [2024-11-15 11:31:36.709872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:59.499 [2024-11-15 11:31:36.709887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.115 ms 00:27:59.499 [2024-11-15 11:31:36.709899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.499 [2024-11-15 11:31:36.709984] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 47b180a1-4151-470a-a496-2e1e525ea212 00:27:59.499 [2024-11-15 11:31:36.711489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.499 [2024-11-15 11:31:36.711730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:59.499 [2024-11-15 11:31:36.711756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:59.499 [2024-11-15 11:31:36.711770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.499 [2024-11-15 11:31:36.719267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.499 [2024-11-15 11:31:36.719475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:59.499 [2024-11-15 11:31:36.719499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.454 ms 00:27:59.499 [2024-11-15 11:31:36.719514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.499 [2024-11-15 11:31:36.719590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.499 [2024-11-15 11:31:36.719609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:59.499 [2024-11-15 11:31:36.719622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:27:59.499 [2024-11-15 11:31:36.719639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.499 [2024-11-15 11:31:36.719702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.499 [2024-11-15 11:31:36.719720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:59.500 [2024-11-15 11:31:36.719732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:59.500 [2024-11-15 11:31:36.719752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.500 [2024-11-15 11:31:36.719780] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:59.500 [2024-11-15 11:31:36.724847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.500 [2024-11-15 11:31:36.724885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:59.500 [2024-11-15 11:31:36.724905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.079 ms 00:27:59.500 [2024-11-15 11:31:36.724917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.500 [2024-11-15 11:31:36.724950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.500 [2024-11-15 11:31:36.724963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:59.500 [2024-11-15 11:31:36.724977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:59.500 [2024-11-15 11:31:36.724990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:59.500 [2024-11-15 11:31:36.725038] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:59.500 [2024-11-15 11:31:36.725178] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:59.500 [2024-11-15 11:31:36.725201] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:59.500 [2024-11-15 11:31:36.725218] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:59.500 [2024-11-15 11:31:36.725237] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:59.500 [2024-11-15 11:31:36.725250] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:59.500 [2024-11-15 11:31:36.725267] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:59.500 [2024-11-15 11:31:36.725278] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:59.500 [2024-11-15 11:31:36.725297] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:59.500 [2024-11-15 11:31:36.725308] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:59.500 [2024-11-15 11:31:36.725322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.500 [2024-11-15 11:31:36.725335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:59.500 [2024-11-15 11:31:36.725349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:27:59.500 [2024-11-15 11:31:36.725360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.500 [2024-11-15 11:31:36.725444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.500 [2024-11-15 11:31:36.725458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:59.500 [2024-11-15 11:31:36.725474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:27:59.500 [2024-11-15 11:31:36.725498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.500 [2024-11-15 11:31:36.725629] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:59.500 [2024-11-15 11:31:36.725645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:59.500 [2024-11-15 11:31:36.725662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:59.500 [2024-11-15 11:31:36.725674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.500 [2024-11-15 11:31:36.725688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:59.500 [2024-11-15 11:31:36.725699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:59.500 [2024-11-15 11:31:36.725713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:59.500 [2024-11-15 11:31:36.725725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:59.500 [2024-11-15 11:31:36.725739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:59.500 [2024-11-15 11:31:36.725749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.500 [2024-11-15 11:31:36.725762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:59.500 [2024-11-15 11:31:36.725774] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:59.500 [2024-11-15 11:31:36.725787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.500 [2024-11-15 11:31:36.725798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:59.500 [2024-11-15 11:31:36.725811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:59.500 [2024-11-15 11:31:36.725820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.500 [2024-11-15 11:31:36.725836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:59.500 [2024-11-15 11:31:36.725847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:59.500 [2024-11-15 11:31:36.725862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.500 [2024-11-15 11:31:36.725872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:59.500 [2024-11-15 11:31:36.725885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:59.500 [2024-11-15 11:31:36.725896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:59.500 [2024-11-15 11:31:36.725910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:59.500 [2024-11-15 11:31:36.725921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:59.500 [2024-11-15 11:31:36.725934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:59.500 [2024-11-15 11:31:36.725944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:59.500 [2024-11-15 11:31:36.725958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:59.500 [2024-11-15 11:31:36.725969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:59.500 [2024-11-15 11:31:36.725983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:59.500 [2024-11-15 11:31:36.725994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:59.500 [2024-11-15 11:31:36.726006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:59.500 [2024-11-15 11:31:36.726017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:59.500 [2024-11-15 11:31:36.726033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:59.500 [2024-11-15 11:31:36.726043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.500 [2024-11-15 11:31:36.726055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:59.500 [2024-11-15 11:31:36.726065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:59.500 [2024-11-15 11:31:36.726078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.500 [2024-11-15 11:31:36.726088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:59.500 [2024-11-15 11:31:36.726101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:59.500 [2024-11-15 11:31:36.726111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.500 [2024-11-15 11:31:36.726125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:59.500 [2024-11-15 11:31:36.726135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:59.500 [2024-11-15 11:31:36.726147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.500 [2024-11-15 11:31:36.726158] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:59.500 [2024-11-15 11:31:36.726171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:59.500 [2024-11-15 11:31:36.726193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:59.500 [2024-11-15 11:31:36.726209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.500 [2024-11-15 11:31:36.726222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:59.500 [2024-11-15 11:31:36.726238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:59.500 [2024-11-15 11:31:36.726249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:59.500 [2024-11-15 11:31:36.726262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:59.500 [2024-11-15 11:31:36.726273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:59.500 [2024-11-15 11:31:36.726286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:59.500 [2024-11-15 11:31:36.726302] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:59.500 [2024-11-15 11:31:36.726319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:59.500 [2024-11-15 11:31:36.726335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:59.500 [2024-11-15 11:31:36.726350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:59.500 [2024-11-15 11:31:36.726361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:59.500 [2024-11-15 11:31:36.726375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:59.500 [2024-11-15 11:31:36.726387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:59.500 [2024-11-15 11:31:36.726402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:59.500 [2024-11-15 11:31:36.726414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:59.500 [2024-11-15 11:31:36.726429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:59.500 [2024-11-15 11:31:36.726441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:59.500 [2024-11-15 11:31:36.726457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:59.500 [2024-11-15 11:31:36.726469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:59.500 [2024-11-15 11:31:36.726483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:59.500 [2024-11-15 11:31:36.726494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:59.500 [2024-11-15 11:31:36.726510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:59.500 [2024-11-15 11:31:36.726522] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:59.501 [2024-11-15 11:31:36.726537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:59.501 [2024-11-15 11:31:36.726550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:59.501 [2024-11-15 11:31:36.726943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:59.501 [2024-11-15 11:31:36.727004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:59.501 [2024-11-15 11:31:36.727061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:59.501 [2024-11-15 11:31:36.727167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.501 [2024-11-15 11:31:36.727213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:59.501 [2024-11-15 11:31:36.727248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.630 ms 00:27:59.501 [2024-11-15 11:31:36.727286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.501 [2024-11-15 11:31:36.727368] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
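For anyone reproducing this setup by hand, the three RPC calls the harness issued above are enough to stand up the same FTL instance. A minimal sketch in bash; the PCIe address, split size, and lvol UUID are taken from this particular run and will differ elsewhere:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Attach the NVMe controller that backs the write-buffer cache (-> cachen1)
  $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  # Carve a 5120 MiB partition out of it for the NV cache (-> cachen1p0)
  $RPC bdev_split_create cachen1 -s 5120 1
  # Create the FTL bdev over the thin-provisioned lvol (base device) and the
  # split partition (cache); -t 60 raises the RPC timeout because startup
  # includes the NV cache scrub traced in the log around this point
  $RPC -t 60 bdev_ftl_create -b ftl -d f5f6f2d0-2233-4df7-bacb-8e439ccd1c62 \
    -c cachen1p0 --l2p_dram_limit 2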
00:27:59.501 [2024-11-15 11:31:36.727492] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:04.767 [2024-11-15 11:31:42.101795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.767 [2024-11-15 11:31:42.102044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:04.767 [2024-11-15 11:31:42.102144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5383.152 ms 00:28:04.767 [2024-11-15 11:31:42.102205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.767 [2024-11-15 11:31:42.144654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.767 [2024-11-15 11:31:42.144893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:04.767 [2024-11-15 11:31:42.145052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.125 ms 00:28:04.767 [2024-11-15 11:31:42.145097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.767 [2024-11-15 11:31:42.145218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.767 [2024-11-15 11:31:42.145492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:04.767 [2024-11-15 11:31:42.145538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:04.767 [2024-11-15 11:31:42.145615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.026 [2024-11-15 11:31:42.198680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.026 [2024-11-15 11:31:42.198878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:05.026 [2024-11-15 11:31:42.198902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.980 ms 00:28:05.026 [2024-11-15 11:31:42.198920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.026 [2024-11-15 11:31:42.198955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.026 [2024-11-15 11:31:42.198975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:05.026 [2024-11-15 11:31:42.198987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:05.026 [2024-11-15 11:31:42.199001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.026 [2024-11-15 11:31:42.199485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.026 [2024-11-15 11:31:42.199505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:05.026 [2024-11-15 11:31:42.199517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.426 ms 00:28:05.026 [2024-11-15 11:31:42.199532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.026 [2024-11-15 11:31:42.199615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.026 [2024-11-15 11:31:42.199632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:05.026 [2024-11-15 11:31:42.199647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:05.026 [2024-11-15 11:31:42.199663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.026 [2024-11-15 11:31:42.220647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.026 [2024-11-15 11:31:42.220695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:05.026 [2024-11-15 11:31:42.220710] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.997 ms 00:28:05.026 [2024-11-15 11:31:42.220725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.026 [2024-11-15 11:31:42.244689] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:05.026 [2024-11-15 11:31:42.245923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.026 [2024-11-15 11:31:42.245961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:05.026 [2024-11-15 11:31:42.245984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.151 ms 00:28:05.026 [2024-11-15 11:31:42.245999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.026 [2024-11-15 11:31:42.290685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.026 [2024-11-15 11:31:42.290726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:05.026 [2024-11-15 11:31:42.290746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.716 ms 00:28:05.026 [2024-11-15 11:31:42.290757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.026 [2024-11-15 11:31:42.290835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.026 [2024-11-15 11:31:42.290851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:05.026 [2024-11-15 11:31:42.290869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:28:05.026 [2024-11-15 11:31:42.290881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.026 [2024-11-15 11:31:42.329327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.026 [2024-11-15 11:31:42.329368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:05.026 [2024-11-15 11:31:42.329386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.450 ms 00:28:05.026 [2024-11-15 11:31:42.329398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.026 [2024-11-15 11:31:42.367659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.026 [2024-11-15 11:31:42.367697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:05.026 [2024-11-15 11:31:42.367715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.286 ms 00:28:05.026 [2024-11-15 11:31:42.367725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.026 [2024-11-15 11:31:42.368498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.026 [2024-11-15 11:31:42.368519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:05.026 [2024-11-15 11:31:42.368534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.746 ms 00:28:05.026 [2024-11-15 11:31:42.368548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.285 [2024-11-15 11:31:42.502974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.285 [2024-11-15 11:31:42.503017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:05.285 [2024-11-15 11:31:42.503040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 134.573 ms 00:28:05.285 [2024-11-15 11:31:42.503051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.285 [2024-11-15 11:31:42.544077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:05.285 [2024-11-15 11:31:42.544122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:05.285 [2024-11-15 11:31:42.544152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.003 ms 00:28:05.285 [2024-11-15 11:31:42.544164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.285 [2024-11-15 11:31:42.584437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.285 [2024-11-15 11:31:42.584480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:05.285 [2024-11-15 11:31:42.584498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.288 ms 00:28:05.285 [2024-11-15 11:31:42.584510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.285 [2024-11-15 11:31:42.625068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.285 [2024-11-15 11:31:42.625110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:05.285 [2024-11-15 11:31:42.625128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.572 ms 00:28:05.285 [2024-11-15 11:31:42.625140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.285 [2024-11-15 11:31:42.625192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.285 [2024-11-15 11:31:42.625206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:05.285 [2024-11-15 11:31:42.625224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:05.285 [2024-11-15 11:31:42.625236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.285 [2024-11-15 11:31:42.625346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.285 [2024-11-15 11:31:42.625361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:05.285 [2024-11-15 11:31:42.625380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:28:05.285 [2024-11-15 11:31:42.625391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.285 [2024-11-15 11:31:42.626710] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5927.159 ms, result 0 00:28:05.285 { 00:28:05.285 "name": "ftl", 00:28:05.285 "uuid": "47b180a1-4151-470a-a496-2e1e525ea212" 00:28:05.285 } 00:28:05.285 11:31:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:05.544 [2024-11-15 11:31:42.853282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.544 11:31:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:05.802 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:06.074 [2024-11-15 11:31:43.280968] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:06.075 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:06.346 [2024-11-15 11:31:43.495112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:06.346 11:31:43 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:06.605 Fill FTL, iteration 1 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80735 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80735 /var/tmp/spdk.tgt.sock 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80735 ']' 00:28:06.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:06.605 11:31:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:06.605 [2024-11-15 11:31:43.990997] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:28:06.605 [2024-11-15 11:31:43.991777] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80735 ] 00:28:06.864 [2024-11-15 11:31:44.180653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.122 [2024-11-15 11:31:44.342267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.060 11:31:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:08.060 11:31:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:08.060 11:31:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:08.319 ftln1 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80735 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80735 ']' 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80735 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80735 00:28:08.578 killing process with pid 80735 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80735' 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80735 00:28:08.578 11:31:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80735 00:28:11.112 11:31:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:11.112 11:31:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:11.372 [2024-11-15 11:31:48.597999] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:28:11.372 [2024-11-15 11:31:48.598140] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80797 ] 00:28:11.630 [2024-11-15 11:31:48.782720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.630 [2024-11-15 11:31:48.926884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.536  [2024-11-15T11:31:51.505Z] Copying: 248/1024 [MB] (248 MBps) [2024-11-15T11:31:52.883Z] Copying: 502/1024 [MB] (254 MBps) [2024-11-15T11:31:53.462Z] Copying: 758/1024 [MB] (256 MBps) [2024-11-15T11:31:53.758Z] Copying: 1014/1024 [MB] (256 MBps) [2024-11-15T11:31:55.137Z] Copying: 1024/1024 [MB] (average 253 MBps) 00:28:17.736 00:28:17.736 Calculate MD5 checksum, iteration 1 00:28:17.736 11:31:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:17.736 11:31:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:17.736 11:31:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:17.736 11:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:17.736 11:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:17.736 11:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:17.736 11:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:17.736 11:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:17.736 [2024-11-15 11:31:54.843001] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:28:17.736 [2024-11-15 11:31:54.843306] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80862 ] 00:28:17.736 [2024-11-15 11:31:55.031061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.995 [2024-11-15 11:31:55.177726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.373  [2024-11-15T11:31:57.711Z] Copying: 622/1024 [MB] (622 MBps) [2024-11-15T11:31:58.646Z] Copying: 1024/1024 [MB] (average 617 MBps) 00:28:21.245 00:28:21.245 11:31:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:21.245 11:31:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:23.147 11:32:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:23.147 Fill FTL, iteration 2 00:28:23.147 11:32:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c0261b06b4143385cd08b1efb2a2418c 00:28:23.147 11:32:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:23.147 11:32:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:23.147 11:32:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:23.148 11:32:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:23.148 11:32:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:23.148 11:32:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:23.148 11:32:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:23.148 11:32:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:23.148 11:32:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:23.148 [2024-11-15 11:32:00.284180] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:28:23.148 [2024-11-15 11:32:00.284800] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80921 ] 00:28:23.148 [2024-11-15 11:32:00.473148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.405 [2024-11-15 11:32:00.621827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.780  [2024-11-15T11:32:03.557Z] Copying: 247/1024 [MB] (247 MBps) [2024-11-15T11:32:04.561Z] Copying: 485/1024 [MB] (238 MBps) [2024-11-15T11:32:05.496Z] Copying: 712/1024 [MB] (227 MBps) [2024-11-15T11:32:05.755Z] Copying: 922/1024 [MB] (210 MBps) [2024-11-15T11:32:07.130Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:28:29.729 00:28:29.729 Calculate MD5 checksum, iteration 2 00:28:29.729 11:32:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:29.729 11:32:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:29.729 11:32:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:29.729 11:32:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:29.729 11:32:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:29.729 11:32:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:29.729 11:32:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:29.729 11:32:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:29.729 [2024-11-15 11:32:06.998494] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
00:28:29.729 [2024-11-15 11:32:06.998859] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80992 ] 00:28:29.988 [2024-11-15 11:32:07.191454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.988 [2024-11-15 11:32:07.314644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.889  [2024-11-15T11:32:09.857Z] Copying: 596/1024 [MB] (596 MBps) [2024-11-15T11:32:12.387Z] Copying: 1024/1024 [MB] (average 614 MBps) 00:28:34.986 00:28:34.986 11:32:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:34.986 11:32:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:36.364 11:32:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:36.364 11:32:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=12ad3090ac7b2300e18cb8f0a8f67f8c 00:28:36.364 11:32:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:36.364 11:32:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:36.364 11:32:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:36.624 [2024-11-15 11:32:13.951993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.624 [2024-11-15 11:32:13.952051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:36.624 [2024-11-15 11:32:13.952068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:36.624 [2024-11-15 11:32:13.952079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.624 [2024-11-15 11:32:13.952122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.624 [2024-11-15 11:32:13.952133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:36.624 [2024-11-15 11:32:13.952149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:36.624 [2024-11-15 11:32:13.952159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.624 [2024-11-15 11:32:13.952180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.624 [2024-11-15 11:32:13.952192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:36.624 [2024-11-15 11:32:13.952203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:36.624 [2024-11-15 11:32:13.952214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.624 [2024-11-15 11:32:13.952274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.277 ms, result 0 00:28:36.624 true 00:28:36.624 11:32:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:36.884 { 00:28:36.884 "name": "ftl", 00:28:36.884 "properties": [ 00:28:36.884 { 00:28:36.884 "name": "superblock_version", 00:28:36.884 "value": 5, 00:28:36.884 "read-only": true 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "name": "base_device", 00:28:36.884 "bands": [ 00:28:36.884 { 00:28:36.884 "id": 0, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 
00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 1, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 2, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 3, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 4, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 5, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 6, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 7, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 8, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 9, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 10, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 11, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 12, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 13, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 14, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 15, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 16, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 17, 00:28:36.884 "state": "FREE", 00:28:36.884 "validity": 0.0 00:28:36.884 } 00:28:36.884 ], 00:28:36.884 "read-only": true 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "name": "cache_device", 00:28:36.884 "type": "bdev", 00:28:36.884 "chunks": [ 00:28:36.884 { 00:28:36.884 "id": 0, 00:28:36.884 "state": "INACTIVE", 00:28:36.884 "utilization": 0.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 1, 00:28:36.884 "state": "CLOSED", 00:28:36.884 "utilization": 1.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 2, 00:28:36.884 "state": "CLOSED", 00:28:36.884 "utilization": 1.0 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 3, 00:28:36.884 "state": "OPEN", 00:28:36.884 "utilization": 0.001953125 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "id": 4, 00:28:36.884 "state": "OPEN", 00:28:36.884 "utilization": 0.0 00:28:36.884 } 00:28:36.884 ], 00:28:36.884 "read-only": true 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "name": "verbose_mode", 00:28:36.884 "value": true, 00:28:36.884 "unit": "", 00:28:36.884 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:36.884 }, 00:28:36.884 { 00:28:36.884 "name": "prep_upgrade_on_shutdown", 00:28:36.884 "value": false, 00:28:36.884 "unit": "", 00:28:36.884 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:36.884 } 00:28:36.884 ] 00:28:36.884 } 00:28:36.884 11:32:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:37.144 [2024-11-15 11:32:14.391757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:37.144 [2024-11-15 11:32:14.391813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:37.144 [2024-11-15 11:32:14.391830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:37.144 [2024-11-15 11:32:14.391842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.144 [2024-11-15 11:32:14.391884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.144 [2024-11-15 11:32:14.391896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:37.144 [2024-11-15 11:32:14.391907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:37.144 [2024-11-15 11:32:14.391918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.144 [2024-11-15 11:32:14.391941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.144 [2024-11-15 11:32:14.391964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:37.144 [2024-11-15 11:32:14.391976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:37.144 [2024-11-15 11:32:14.391985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.144 [2024-11-15 11:32:14.392045] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.293 ms, result 0 00:28:37.144 true 00:28:37.144 11:32:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:37.144 11:32:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:37.145 11:32:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:37.404 11:32:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:37.404 11:32:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:37.404 11:32:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:37.664 [2024-11-15 11:32:14.843429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.664 [2024-11-15 11:32:14.843495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:37.664 [2024-11-15 11:32:14.843519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:37.664 [2024-11-15 11:32:14.843530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.664 [2024-11-15 11:32:14.843575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.664 [2024-11-15 11:32:14.843587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:37.664 [2024-11-15 11:32:14.843598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:37.664 [2024-11-15 11:32:14.843608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.664 [2024-11-15 11:32:14.843629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.664 [2024-11-15 11:32:14.843639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:37.664 [2024-11-15 11:32:14.843650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:37.664 [2024-11-15 11:32:14.843660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:37.664 [2024-11-15 11:32:14.843740] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.301 ms, result 0 00:28:37.664 true 00:28:37.664 11:32:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:37.924 { 00:28:37.924 "name": "ftl", 00:28:37.924 "properties": [ 00:28:37.924 { 00:28:37.924 "name": "superblock_version", 00:28:37.924 "value": 5, 00:28:37.924 "read-only": true 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "name": "base_device", 00:28:37.924 "bands": [ 00:28:37.924 { 00:28:37.924 "id": 0, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 1, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 2, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 3, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 4, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 5, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 6, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 7, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 8, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 9, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 10, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 11, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 12, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 13, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 14, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 15, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 16, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 17, 00:28:37.924 "state": "FREE", 00:28:37.924 "validity": 0.0 00:28:37.924 } 00:28:37.924 ], 00:28:37.924 "read-only": true 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "name": "cache_device", 00:28:37.924 "type": "bdev", 00:28:37.924 "chunks": [ 00:28:37.924 { 00:28:37.924 "id": 0, 00:28:37.924 "state": "INACTIVE", 00:28:37.924 "utilization": 0.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 1, 00:28:37.924 "state": "CLOSED", 00:28:37.924 "utilization": 1.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 2, 00:28:37.924 "state": "CLOSED", 00:28:37.924 "utilization": 1.0 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 3, 00:28:37.924 "state": "OPEN", 00:28:37.924 "utilization": 0.001953125 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "id": 4, 00:28:37.924 "state": "OPEN", 00:28:37.924 "utilization": 0.0 00:28:37.924 } 00:28:37.924 ], 00:28:37.924 "read-only": true 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "name": "verbose_mode", 
00:28:37.924 "value": true, 00:28:37.924 "unit": "", 00:28:37.924 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:37.924 }, 00:28:37.924 { 00:28:37.924 "name": "prep_upgrade_on_shutdown", 00:28:37.924 "value": true, 00:28:37.924 "unit": "", 00:28:37.924 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:37.924 } 00:28:37.924 ] 00:28:37.924 } 00:28:37.924 11:32:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:37.924 11:32:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80591 ]] 00:28:37.924 11:32:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80591 00:28:37.924 11:32:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80591 ']' 00:28:37.924 11:32:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80591 00:28:37.924 11:32:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:37.924 11:32:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:37.924 11:32:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80591 00:28:37.924 killing process with pid 80591 00:28:37.924 11:32:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:37.924 11:32:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:37.924 11:32:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80591' 00:28:37.924 11:32:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80591 00:28:37.924 11:32:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80591 00:28:38.862 [2024-11-15 11:32:16.258257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:39.121 [2024-11-15 11:32:16.278058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.121 [2024-11-15 11:32:16.278100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:39.121 [2024-11-15 11:32:16.278116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:39.121 [2024-11-15 11:32:16.278127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.121 [2024-11-15 11:32:16.278149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:39.121 [2024-11-15 11:32:16.282441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.121 [2024-11-15 11:32:16.282471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:39.121 [2024-11-15 11:32:16.282483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.282 ms 00:28:39.121 [2024-11-15 11:32:16.282493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.603653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.293 [2024-11-15 11:32:23.603717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:47.293 [2024-11-15 11:32:23.603735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7333.013 ms 00:28:47.293 [2024-11-15 11:32:23.603751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.604932] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:47.293 [2024-11-15 11:32:23.604957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:47.293 [2024-11-15 11:32:23.604969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.164 ms 00:28:47.293 [2024-11-15 11:32:23.604980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.605915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.293 [2024-11-15 11:32:23.606131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:47.293 [2024-11-15 11:32:23.606154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.905 ms 00:28:47.293 [2024-11-15 11:32:23.606181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.621736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.293 [2024-11-15 11:32:23.621776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:47.293 [2024-11-15 11:32:23.621790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.537 ms 00:28:47.293 [2024-11-15 11:32:23.621800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.631322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.293 [2024-11-15 11:32:23.631362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:47.293 [2024-11-15 11:32:23.631376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.500 ms 00:28:47.293 [2024-11-15 11:32:23.631387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.631468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.293 [2024-11-15 11:32:23.631481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:47.293 [2024-11-15 11:32:23.631499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:28:47.293 [2024-11-15 11:32:23.631510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.645777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.293 [2024-11-15 11:32:23.645811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:47.293 [2024-11-15 11:32:23.645824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.271 ms 00:28:47.293 [2024-11-15 11:32:23.645833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.660399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.293 [2024-11-15 11:32:23.660584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:47.293 [2024-11-15 11:32:23.660604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.556 ms 00:28:47.293 [2024-11-15 11:32:23.660615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.674877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.293 [2024-11-15 11:32:23.674909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:47.293 [2024-11-15 11:32:23.674922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.245 ms 00:28:47.293 [2024-11-15 11:32:23.674931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.689109] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.293 [2024-11-15 11:32:23.689143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:47.293 [2024-11-15 11:32:23.689154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.125 ms 00:28:47.293 [2024-11-15 11:32:23.689163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.689196] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:47.293 [2024-11-15 11:32:23.689211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:47.293 [2024-11-15 11:32:23.689223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:47.293 [2024-11-15 11:32:23.689246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:47.293 [2024-11-15 11:32:23.689257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:47.293 [2024-11-15 11:32:23.689433] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:47.293 [2024-11-15 11:32:23.689443] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 47b180a1-4151-470a-a496-2e1e525ea212 00:28:47.293 [2024-11-15 11:32:23.689454] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:47.293 [2024-11-15 11:32:23.689463] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:28:47.293 [2024-11-15 11:32:23.689473] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:47.293 [2024-11-15 11:32:23.689484] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:47.293 [2024-11-15 11:32:23.689493] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:47.293 [2024-11-15 11:32:23.689508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:47.293 [2024-11-15 11:32:23.689518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:47.293 [2024-11-15 11:32:23.689528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:47.293 [2024-11-15 11:32:23.689539] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:47.293 [2024-11-15 11:32:23.689549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.293 [2024-11-15 11:32:23.689563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:47.293 [2024-11-15 11:32:23.689588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.354 ms 00:28:47.293 [2024-11-15 11:32:23.689600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.709415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.293 [2024-11-15 11:32:23.709605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:47.293 [2024-11-15 11:32:23.709626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.799 ms 00:28:47.293 [2024-11-15 11:32:23.709645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.710219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.293 [2024-11-15 11:32:23.710234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:47.293 [2024-11-15 11:32:23.710247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.551 ms 00:28:47.293 [2024-11-15 11:32:23.710258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.774622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:47.293 [2024-11-15 11:32:23.774661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:47.293 [2024-11-15 11:32:23.774680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:47.293 [2024-11-15 11:32:23.774690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.774721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:47.293 [2024-11-15 11:32:23.774731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:47.293 [2024-11-15 11:32:23.774742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:47.293 [2024-11-15 11:32:23.774753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.774830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:47.293 [2024-11-15 11:32:23.774844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:47.293 [2024-11-15 11:32:23.774855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:47.293 [2024-11-15 11:32:23.774876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.774902] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:47.293 [2024-11-15 11:32:23.774915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:47.293 [2024-11-15 11:32:23.774926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:47.293 [2024-11-15 11:32:23.774936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.293 [2024-11-15 11:32:23.896742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:47.293 [2024-11-15 11:32:23.896797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:47.294 [2024-11-15 11:32:23.896817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:47.294 [2024-11-15 11:32:23.896826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.294 [2024-11-15 11:32:23.996768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:47.294 [2024-11-15 11:32:23.996825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:47.294 [2024-11-15 11:32:23.996839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:47.294 [2024-11-15 11:32:23.996850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.294 [2024-11-15 11:32:23.996957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:47.294 [2024-11-15 11:32:23.996971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:47.294 [2024-11-15 11:32:23.996982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:47.294 [2024-11-15 11:32:23.996993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.294 [2024-11-15 11:32:23.997049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:47.294 [2024-11-15 11:32:23.997062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:47.294 [2024-11-15 11:32:23.997072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:47.294 [2024-11-15 11:32:23.997083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.294 [2024-11-15 11:32:23.997203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:47.294 [2024-11-15 11:32:23.997217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:47.294 [2024-11-15 11:32:23.997229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:47.294 [2024-11-15 11:32:23.997240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.294 [2024-11-15 11:32:23.997287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:47.294 [2024-11-15 11:32:23.997305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:47.294 [2024-11-15 11:32:23.997315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:47.294 [2024-11-15 11:32:23.997326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.294 [2024-11-15 11:32:23.997366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:47.294 [2024-11-15 11:32:23.997378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:47.294 [2024-11-15 11:32:23.997388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:47.294 [2024-11-15 11:32:23.997398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.294 
[2024-11-15 11:32:23.997444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:47.294 [2024-11-15 11:32:23.997456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:47.294 [2024-11-15 11:32:23.997467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:47.294 [2024-11-15 11:32:23.997477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.294 [2024-11-15 11:32:23.997620] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7732.048 ms, result 0 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81201 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81201 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81201 ']' 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:51.540 11:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:51.540 [2024-11-15 11:32:28.748202] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
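The xtrace above and below is the shutdown/restart pair driven by upgrade_shutdown.sh@74-75: tcp_target_shutdown (ftl/common.sh@130-132) signals the recorded target pid 80591 via killprocess (SIGTERM, then wait), which is what gives FTL the time to persist L2P, NV cache, band and trim metadata and reach the clean state logged above, and tcp_target_setup (ftl/common.sh@81-91) then relaunches spdk_tgt from the saved tgt.json as pid 81201. A minimal sketch reconstructed from that trace; the helper bodies, the backgrounding with '&', and the $testdir/$rootdir variables are approximations filled in around the traced lines, not the verbatim ftl/common.sh source:

tcp_target_shutdown() {                       # ftl/common.sh@130-132
    [[ -n $spdk_tgt_pid ]] || return 0        # only act if a target pid was recorded
    killprocess "$spdk_tgt_pid"               # SIGTERM + wait: FTL persists metadata first
    unset spdk_tgt_pid
}

tcp_target_setup() {                          # ftl/common.sh@81-91
    local base_bdev= cache_bdev=
    # @84-85: relaunch spdk_tgt from the bdev config saved by the previous run
    [[ -f $testdir/config/tgt.json ]] &&
        $rootdir/build/bin/spdk_tgt '--cpumask=[0]' \
            --config=$testdir/config/tgt.json &
    spdk_tgt_pid=$!                           # @89-90: 81201 in this run
    export spdk_tgt_pid
    waitforlisten "$spdk_tgt_pid"             # @91: block until /var/tmp/spdk.sock answers
}
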
00:28:51.540 [2024-11-15 11:32:28.749009] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81201 ] 00:28:51.540 [2024-11-15 11:32:28.934267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.799 [2024-11-15 11:32:29.045680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.736 [2024-11-15 11:32:30.025709] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:52.736 [2024-11-15 11:32:30.025783] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:52.997 [2024-11-15 11:32:30.172685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.997 [2024-11-15 11:32:30.172733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:52.997 [2024-11-15 11:32:30.172750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:52.997 [2024-11-15 11:32:30.172761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.997 [2024-11-15 11:32:30.172818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.997 [2024-11-15 11:32:30.172832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:52.997 [2024-11-15 11:32:30.172843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:52.997 [2024-11-15 11:32:30.172853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.997 [2024-11-15 11:32:30.172877] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:52.997 [2024-11-15 11:32:30.173864] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:52.997 [2024-11-15 11:32:30.173897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.997 [2024-11-15 11:32:30.173909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:52.997 [2024-11-15 11:32:30.173921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.026 ms 00:28:52.997 [2024-11-15 11:32:30.173933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.997 [2024-11-15 11:32:30.175376] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:52.997 [2024-11-15 11:32:30.194751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.997 [2024-11-15 11:32:30.194790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:52.997 [2024-11-15 11:32:30.194812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.407 ms 00:28:52.997 [2024-11-15 11:32:30.194823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.997 [2024-11-15 11:32:30.194885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.997 [2024-11-15 11:32:30.194899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:52.997 [2024-11-15 11:32:30.194911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:52.997 [2024-11-15 11:32:30.194922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.997 [2024-11-15 11:32:30.201661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.997 [2024-11-15 
11:32:30.201846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:52.997 [2024-11-15 11:32:30.201867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.669 ms 00:28:52.997 [2024-11-15 11:32:30.201878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.997 [2024-11-15 11:32:30.201948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.997 [2024-11-15 11:32:30.201962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:52.997 [2024-11-15 11:32:30.201974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:28:52.997 [2024-11-15 11:32:30.201986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.997 [2024-11-15 11:32:30.202031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.997 [2024-11-15 11:32:30.202043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:52.997 [2024-11-15 11:32:30.202057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:52.997 [2024-11-15 11:32:30.202068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.997 [2024-11-15 11:32:30.202093] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:52.997 [2024-11-15 11:32:30.206907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.997 [2024-11-15 11:32:30.206941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:52.997 [2024-11-15 11:32:30.206953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.826 ms 00:28:52.997 [2024-11-15 11:32:30.206968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.997 [2024-11-15 11:32:30.206995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.997 [2024-11-15 11:32:30.207007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:52.997 [2024-11-15 11:32:30.207018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:52.997 [2024-11-15 11:32:30.207028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.997 [2024-11-15 11:32:30.207085] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:52.997 [2024-11-15 11:32:30.207110] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:52.997 [2024-11-15 11:32:30.207150] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:52.997 [2024-11-15 11:32:30.207168] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:52.997 [2024-11-15 11:32:30.207259] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:52.997 [2024-11-15 11:32:30.207272] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:52.997 [2024-11-15 11:32:30.207287] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:52.997 [2024-11-15 11:32:30.207301] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:52.997 [2024-11-15 11:32:30.207313] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:52.997 [2024-11-15 11:32:30.207328] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:52.997 [2024-11-15 11:32:30.207339] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:52.997 [2024-11-15 11:32:30.207349] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:52.997 [2024-11-15 11:32:30.207359] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:52.997 [2024-11-15 11:32:30.207370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.997 [2024-11-15 11:32:30.207381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:52.997 [2024-11-15 11:32:30.207391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:28:52.997 [2024-11-15 11:32:30.207403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.997 [2024-11-15 11:32:30.207476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.997 [2024-11-15 11:32:30.207487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:52.997 [2024-11-15 11:32:30.207498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:28:52.997 [2024-11-15 11:32:30.207512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.997 [2024-11-15 11:32:30.207624] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:52.997 [2024-11-15 11:32:30.207640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:52.997 [2024-11-15 11:32:30.207652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:52.997 [2024-11-15 11:32:30.207664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.997 [2024-11-15 11:32:30.207675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:52.997 [2024-11-15 11:32:30.207685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:52.997 [2024-11-15 11:32:30.207695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:52.997 [2024-11-15 11:32:30.207705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:52.997 [2024-11-15 11:32:30.207715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:52.997 [2024-11-15 11:32:30.207728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.997 [2024-11-15 11:32:30.207739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:52.997 [2024-11-15 11:32:30.207749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:52.997 [2024-11-15 11:32:30.207758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.997 [2024-11-15 11:32:30.207794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:52.997 [2024-11-15 11:32:30.207807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:52.997 [2024-11-15 11:32:30.207818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.997 [2024-11-15 11:32:30.207827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:52.997 [2024-11-15 11:32:30.207837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:52.997 [2024-11-15 11:32:30.207846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.997 [2024-11-15 11:32:30.207856] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:52.997 [2024-11-15 11:32:30.207866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:52.997 [2024-11-15 11:32:30.207877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:52.997 [2024-11-15 11:32:30.207886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:52.997 [2024-11-15 11:32:30.207895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:52.997 [2024-11-15 11:32:30.207904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:52.997 [2024-11-15 11:32:30.207924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:52.997 [2024-11-15 11:32:30.207933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:52.997 [2024-11-15 11:32:30.207943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:52.997 [2024-11-15 11:32:30.207951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:52.997 [2024-11-15 11:32:30.207961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:52.997 [2024-11-15 11:32:30.207970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:52.997 [2024-11-15 11:32:30.207981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:52.997 [2024-11-15 11:32:30.207990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:52.997 [2024-11-15 11:32:30.207999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.998 [2024-11-15 11:32:30.208009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:52.998 [2024-11-15 11:32:30.208018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:52.998 [2024-11-15 11:32:30.208027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.998 [2024-11-15 11:32:30.208036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:52.998 [2024-11-15 11:32:30.208045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:52.998 [2024-11-15 11:32:30.208055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.998 [2024-11-15 11:32:30.208065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:52.998 [2024-11-15 11:32:30.208075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:52.998 [2024-11-15 11:32:30.208084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.998 [2024-11-15 11:32:30.208093] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:52.998 [2024-11-15 11:32:30.208103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:52.998 [2024-11-15 11:32:30.208112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:52.998 [2024-11-15 11:32:30.208122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.998 [2024-11-15 11:32:30.208136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:52.998 [2024-11-15 11:32:30.208146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:52.998 [2024-11-15 11:32:30.208155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:52.998 [2024-11-15 11:32:30.208164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:52.998 [2024-11-15 11:32:30.208174] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:52.998 [2024-11-15 11:32:30.208185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:52.998 [2024-11-15 11:32:30.208196] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:52.998 [2024-11-15 11:32:30.208209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:52.998 [2024-11-15 11:32:30.208223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:52.998 [2024-11-15 11:32:30.208234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:52.998 [2024-11-15 11:32:30.208246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:52.998 [2024-11-15 11:32:30.208257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:52.998 [2024-11-15 11:32:30.208269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:52.998 [2024-11-15 11:32:30.208280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:52.998 [2024-11-15 11:32:30.208291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:52.998 [2024-11-15 11:32:30.208302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:52.998 [2024-11-15 11:32:30.208313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:52.998 [2024-11-15 11:32:30.208323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:52.998 [2024-11-15 11:32:30.208333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:52.998 [2024-11-15 11:32:30.208343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:52.998 [2024-11-15 11:32:30.208353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:52.998 [2024-11-15 11:32:30.208363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:52.998 [2024-11-15 11:32:30.208373] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:52.998 [2024-11-15 11:32:30.208384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:52.998 [2024-11-15 11:32:30.208396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:52.998 [2024-11-15 11:32:30.208407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:52.998 [2024-11-15 11:32:30.208419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:52.998 [2024-11-15 11:32:30.208430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:52.998 [2024-11-15 11:32:30.208443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.998 [2024-11-15 11:32:30.208453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:52.998 [2024-11-15 11:32:30.208464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.895 ms 00:28:52.998 [2024-11-15 11:32:30.208475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.998 [2024-11-15 11:32:30.208523] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:52.998 [2024-11-15 11:32:30.208537] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:57.192 [2024-11-15 11:32:33.996039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.192 [2024-11-15 11:32:33.996098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:57.192 [2024-11-15 11:32:33.996117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3793.665 ms 00:28:57.192 [2024-11-15 11:32:33.996128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.192 [2024-11-15 11:32:34.034353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.192 [2024-11-15 11:32:34.034401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:57.192 [2024-11-15 11:32:34.034418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.998 ms 00:28:57.192 [2024-11-15 11:32:34.034429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.192 [2024-11-15 11:32:34.034537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.192 [2024-11-15 11:32:34.034570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:57.192 [2024-11-15 11:32:34.034583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:57.192 [2024-11-15 11:32:34.034594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.192 [2024-11-15 11:32:34.077964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.192 [2024-11-15 11:32:34.078157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:57.192 [2024-11-15 11:32:34.078188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.374 ms 00:28:57.192 [2024-11-15 11:32:34.078204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.192 [2024-11-15 11:32:34.078263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.192 [2024-11-15 11:32:34.078274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:57.192 [2024-11-15 11:32:34.078286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:57.192 [2024-11-15 11:32:34.078297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.192 [2024-11-15 11:32:34.078802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.192 [2024-11-15 11:32:34.078817] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:57.192 [2024-11-15 11:32:34.078829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.428 ms 00:28:57.192 [2024-11-15 11:32:34.078839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.192 [2024-11-15 11:32:34.078890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.192 [2024-11-15 11:32:34.078902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:57.192 [2024-11-15 11:32:34.078912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:57.192 [2024-11-15 11:32:34.078922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.192 [2024-11-15 11:32:34.099793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.192 [2024-11-15 11:32:34.099838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:57.192 [2024-11-15 11:32:34.099853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.880 ms 00:28:57.192 [2024-11-15 11:32:34.099865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.192 [2024-11-15 11:32:34.127758] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:57.192 [2024-11-15 11:32:34.127799] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:57.192 [2024-11-15 11:32:34.127814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.192 [2024-11-15 11:32:34.127826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:57.192 [2024-11-15 11:32:34.127838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.864 ms 00:28:57.192 [2024-11-15 11:32:34.127848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.192 [2024-11-15 11:32:34.147703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.192 [2024-11-15 11:32:34.147742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:57.192 [2024-11-15 11:32:34.147758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.833 ms 00:28:57.192 [2024-11-15 11:32:34.147770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.192 [2024-11-15 11:32:34.165476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.192 [2024-11-15 11:32:34.165512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:57.192 [2024-11-15 11:32:34.165525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.686 ms 00:28:57.192 [2024-11-15 11:32:34.165535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.192 [2024-11-15 11:32:34.183397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.192 [2024-11-15 11:32:34.183433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:57.192 [2024-11-15 11:32:34.183447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.837 ms 00:28:57.193 [2024-11-15 11:32:34.183456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.193 [2024-11-15 11:32:34.184297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.193 [2024-11-15 11:32:34.184334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:57.193 [2024-11-15 
11:32:34.184347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.703 ms 00:28:57.193 [2024-11-15 11:32:34.184357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.193 [2024-11-15 11:32:34.272743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.193 [2024-11-15 11:32:34.272808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:57.193 [2024-11-15 11:32:34.272823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 88.505 ms 00:28:57.193 [2024-11-15 11:32:34.272833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.193 [2024-11-15 11:32:34.283647] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:57.193 [2024-11-15 11:32:34.284468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.193 [2024-11-15 11:32:34.284498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:57.193 [2024-11-15 11:32:34.284512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.606 ms 00:28:57.193 [2024-11-15 11:32:34.284523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.193 [2024-11-15 11:32:34.284618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.193 [2024-11-15 11:32:34.284636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:57.193 [2024-11-15 11:32:34.284648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:57.193 [2024-11-15 11:32:34.284658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.193 [2024-11-15 11:32:34.284736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.193 [2024-11-15 11:32:34.284749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:57.193 [2024-11-15 11:32:34.284761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:57.193 [2024-11-15 11:32:34.284770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.193 [2024-11-15 11:32:34.284797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.193 [2024-11-15 11:32:34.284808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:57.193 [2024-11-15 11:32:34.284822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:57.193 [2024-11-15 11:32:34.284832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.193 [2024-11-15 11:32:34.284867] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:57.193 [2024-11-15 11:32:34.284879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.193 [2024-11-15 11:32:34.284890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:57.193 [2024-11-15 11:32:34.284900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:57.193 [2024-11-15 11:32:34.284910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.193 [2024-11-15 11:32:34.321478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.193 [2024-11-15 11:32:34.321522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:57.193 [2024-11-15 11:32:34.321537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.602 ms 00:28:57.193 [2024-11-15 11:32:34.321548] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.193 [2024-11-15 11:32:34.321644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.193 [2024-11-15 11:32:34.321658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:57.193 [2024-11-15 11:32:34.321669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:57.193 [2024-11-15 11:32:34.321680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.193 [2024-11-15 11:32:34.322817] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4156.407 ms, result 0 00:28:57.193 [2024-11-15 11:32:34.337853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.193 [2024-11-15 11:32:34.353845] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:57.193 [2024-11-15 11:32:34.363160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:57.453 11:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:57.453 11:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:57.453 11:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:57.453 11:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:57.453 11:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:57.712 [2024-11-15 11:32:34.966575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.712 [2024-11-15 11:32:34.966780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:57.712 [2024-11-15 11:32:34.966884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:57.712 [2024-11-15 11:32:34.966934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.712 [2024-11-15 11:32:34.967005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.712 [2024-11-15 11:32:34.967044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:57.712 [2024-11-15 11:32:34.967141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:57.712 [2024-11-15 11:32:34.967182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.712 [2024-11-15 11:32:34.967236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:57.712 [2024-11-15 11:32:34.967272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:57.712 [2024-11-15 11:32:34.967356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:57.712 [2024-11-15 11:32:34.967444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:57.712 [2024-11-15 11:32:34.967571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.994 ms, result 0 00:28:57.712 true 00:28:57.712 11:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:57.970 { 00:28:57.970 "name": "ftl", 00:28:57.970 "properties": [ 00:28:57.970 { 00:28:57.970 "name": "superblock_version", 00:28:57.970 "value": 5, 00:28:57.970 "read-only": true 00:28:57.970 }, 
00:28:57.970 { 00:28:57.970 "name": "base_device", 00:28:57.970 "bands": [ 00:28:57.970 { 00:28:57.970 "id": 0, 00:28:57.970 "state": "CLOSED", 00:28:57.970 "validity": 1.0 00:28:57.970 }, 00:28:57.970 { 00:28:57.970 "id": 1, 00:28:57.970 "state": "CLOSED", 00:28:57.970 "validity": 1.0 00:28:57.970 }, 00:28:57.970 { 00:28:57.970 "id": 2, 00:28:57.970 "state": "CLOSED", 00:28:57.970 "validity": 0.007843137254901933 00:28:57.970 }, 00:28:57.970 { 00:28:57.970 "id": 3, 00:28:57.970 "state": "FREE", 00:28:57.970 "validity": 0.0 00:28:57.970 }, 00:28:57.970 { 00:28:57.970 "id": 4, 00:28:57.970 "state": "FREE", 00:28:57.970 "validity": 0.0 00:28:57.970 }, 00:28:57.970 { 00:28:57.970 "id": 5, 00:28:57.970 "state": "FREE", 00:28:57.970 "validity": 0.0 00:28:57.970 }, 00:28:57.970 { 00:28:57.970 "id": 6, 00:28:57.970 "state": "FREE", 00:28:57.970 "validity": 0.0 00:28:57.970 }, 00:28:57.970 { 00:28:57.970 "id": 7, 00:28:57.970 "state": "FREE", 00:28:57.970 "validity": 0.0 00:28:57.970 }, 00:28:57.970 { 00:28:57.970 "id": 8, 00:28:57.970 "state": "FREE", 00:28:57.970 "validity": 0.0 00:28:57.970 }, 00:28:57.970 { 00:28:57.970 "id": 9, 00:28:57.970 "state": "FREE", 00:28:57.970 "validity": 0.0 00:28:57.970 }, 00:28:57.970 { 00:28:57.970 "id": 10, 00:28:57.970 "state": "FREE", 00:28:57.970 "validity": 0.0 00:28:57.970 }, 00:28:57.970 { 00:28:57.970 "id": 11, 00:28:57.970 "state": "FREE", 00:28:57.970 "validity": 0.0 00:28:57.970 }, 00:28:57.970 { 00:28:57.970 "id": 12, 00:28:57.970 "state": "FREE", 00:28:57.970 "validity": 0.0 00:28:57.970 }, 00:28:57.970 { 00:28:57.970 "id": 13, 00:28:57.970 "state": "FREE", 00:28:57.971 "validity": 0.0 00:28:57.971 }, 00:28:57.971 { 00:28:57.971 "id": 14, 00:28:57.971 "state": "FREE", 00:28:57.971 "validity": 0.0 00:28:57.971 }, 00:28:57.971 { 00:28:57.971 "id": 15, 00:28:57.971 "state": "FREE", 00:28:57.971 "validity": 0.0 00:28:57.971 }, 00:28:57.971 { 00:28:57.971 "id": 16, 00:28:57.971 "state": "FREE", 00:28:57.971 "validity": 0.0 00:28:57.971 }, 00:28:57.971 { 00:28:57.971 "id": 17, 00:28:57.971 "state": "FREE", 00:28:57.971 "validity": 0.0 00:28:57.971 } 00:28:57.971 ], 00:28:57.971 "read-only": true 00:28:57.971 }, 00:28:57.971 { 00:28:57.971 "name": "cache_device", 00:28:57.971 "type": "bdev", 00:28:57.971 "chunks": [ 00:28:57.971 { 00:28:57.971 "id": 0, 00:28:57.971 "state": "INACTIVE", 00:28:57.971 "utilization": 0.0 00:28:57.971 }, 00:28:57.971 { 00:28:57.971 "id": 1, 00:28:57.971 "state": "OPEN", 00:28:57.971 "utilization": 0.0 00:28:57.971 }, 00:28:57.971 { 00:28:57.971 "id": 2, 00:28:57.971 "state": "OPEN", 00:28:57.971 "utilization": 0.0 00:28:57.971 }, 00:28:57.971 { 00:28:57.971 "id": 3, 00:28:57.971 "state": "FREE", 00:28:57.971 "utilization": 0.0 00:28:57.971 }, 00:28:57.971 { 00:28:57.971 "id": 4, 00:28:57.971 "state": "FREE", 00:28:57.971 "utilization": 0.0 00:28:57.971 } 00:28:57.971 ], 00:28:57.971 "read-only": true 00:28:57.971 }, 00:28:57.971 { 00:28:57.971 "name": "verbose_mode", 00:28:57.971 "value": true, 00:28:57.971 "unit": "", 00:28:57.971 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:57.971 }, 00:28:57.971 { 00:28:57.971 "name": "prep_upgrade_on_shutdown", 00:28:57.971 "value": false, 00:28:57.971 "unit": "", 00:28:57.971 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:57.971 } 00:28:57.971 ] 00:28:57.971 } 00:28:57.971 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:57.971 11:32:35 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:57.971 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:58.230 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:58.230 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:58.230 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:58.230 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:58.230 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:58.489 Validate MD5 checksum, iteration 1 00:28:58.489 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:58.489 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:58.489 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:58.489 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:58.489 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:58.489 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:58.489 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:58.489 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:58.489 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:58.489 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:58.489 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:58.489 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:58.489 11:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:58.489 [2024-11-15 11:32:35.827355] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
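Before the read-back starts, the two jq filters traced above (upgrade_shutdown.sh@82 and @89) assert that the freshly started device is idle: no cache chunk has non-zero utilization and no band is in the OPENED state, so used=0, opened=0, and both '[[ 0 -ne 0 ]]' guards fall through. Run by hand against the JSON printed above, they behave like this (a usage sketch; props.json is a hypothetical file holding that JSON). Note that the second filter keys on a property literally named "bands", while in the JSON above the band list lives under "base_device", so as written that selection is empty and can only ever count 0:

$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl > props.json
$ jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' props.json
0    # matches used=0: chunks are INACTIVE, OPEN, or FREE, all at utilization 0.0
$ jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' props.json
0    # matches opened=0: no property is named "bands", so nothing is selected
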
00:28:58.489 [2024-11-15 11:32:35.827674] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81290 ] 00:28:58.747 [2024-11-15 11:32:36.011054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.006 [2024-11-15 11:32:36.157011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.911  [2024-11-15T11:32:38.570Z] Copying: 688/1024 [MB] (688 MBps) [2024-11-15T11:32:39.947Z] Copying: 1024/1024 [MB] (average 682 MBps) 00:29:02.546 00:29:02.546 11:32:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:02.546 11:32:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:04.459 11:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:04.459 Validate MD5 checksum, iteration 2 00:29:04.459 11:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c0261b06b4143385cd08b1efb2a2418c 00:29:04.459 11:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c0261b06b4143385cd08b1efb2a2418c != \c\0\2\6\1\b\0\6\b\4\1\4\3\3\8\5\c\d\0\8\b\1\e\f\b\2\a\2\4\1\8\c ]] 00:29:04.459 11:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:04.459 11:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:04.459 11:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:04.459 11:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:04.459 11:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:04.459 11:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:04.459 11:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:04.459 11:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:04.459 11:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:04.718 [2024-11-15 11:32:41.953397] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 
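The copy/hash/compare cycle above and below is upgrade_shutdown.sh's test_validate_checksum. A reconstruction from the @96-105 xtrace follows; the iterations count, the sums array holding the expected digests, and the bail-out on mismatch are assumptions filled in around the traced lines, not the verbatim script:

test_validate_checksum() {                    # upgrade_shutdown.sh@96-105
    local skip=0 i sum
    for ((i = 0; i < iterations; i++)); do    # two iterations in this run
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # @99: pull 1024 x 1 MiB blocks from ftln1 over NVMe/TCP with spdk_dd
        tcp_dd --ib=ftln1 --of=$testdir/file --bs=1048576 --count=1024 \
            --qd=2 --skip=$skip
        skip=$((skip + 1024))                 # @100: advance one 1 GiB slice per pass
        sum=$(md5sum $testdir/file | cut -f1 '-d ')    # @102-103
        # @105: the slice must hash to the digest recorded before shutdown;
        # ${sums[i]} and the return are assumed, only the comparison is traced
        [[ $sum != "${sums[i]}" ]] && return 1
    done
}
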
00:29:04.718 [2024-11-15 11:32:41.953750] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81357 ] 00:29:04.977 [2024-11-15 11:32:42.140897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.977 [2024-11-15 11:32:42.266401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.881  [2024-11-15T11:32:44.540Z] Copying: 660/1024 [MB] (660 MBps) [2024-11-15T11:32:45.917Z] Copying: 1024/1024 [MB] (average 675 MBps) 00:29:08.516 00:29:08.516 11:32:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:08.516 11:32:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:10.494 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:10.494 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=12ad3090ac7b2300e18cb8f0a8f67f8c 00:29:10.494 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 12ad3090ac7b2300e18cb8f0a8f67f8c != \1\2\a\d\3\0\9\0\a\c\7\b\2\3\0\0\e\1\8\c\b\8\f\0\a\8\f\6\7\f\8\c ]] 00:29:10.494 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:10.494 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:10.494 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:10.494 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81201 ]] 00:29:10.494 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81201 00:29:10.494 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:10.494 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:10.494 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:10.494 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:10.494 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:10.495 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81417 00:29:10.495 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:10.495 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81417 00:29:10.495 11:32:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81417 ']' 00:29:10.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.495 11:32:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.495 11:32:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:10.495 11:32:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
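The second digest matches as well (12ad3090... at @103/@105), so @114 switches to the dirty path: unlike the graceful shutdown earlier, tcp_target_shutdown_dirty (ftl/common.sh@137-139) SIGKILLs pid 81201, hence the 'line 832: 81201 Killed' message from autotest_common.sh below, leaving FTL no chance to persist metadata; @115 then brings the target back up as pid 81417 so the next startup has to recover a dirty device. A sketch reconstructed from that trace; the function body is an approximation, not the verbatim source:

tcp_target_shutdown_dirty() {                 # ftl/common.sh@137-139
    [[ -n $spdk_tgt_pid ]] || return 0
    kill -9 "$spdk_tgt_pid"                   # SIGKILL: no clean FTL shutdown runs
    unset spdk_tgt_pid
}
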
00:29:10.495 11:32:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:10.495 11:32:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:10.495 11:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:10.495 [2024-11-15 11:32:47.710510] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:29:10.495 [2024-11-15 11:32:47.710638] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81417 ] 00:29:10.495 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 81201 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:10.495 [2024-11-15 11:32:47.889604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.754 [2024-11-15 11:32:48.004018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.692 [2024-11-15 11:32:48.958911] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:11.692 [2024-11-15 11:32:48.958980] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:11.952 [2024-11-15 11:32:49.105843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.952 [2024-11-15 11:32:49.106042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:11.952 [2024-11-15 11:32:49.106066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:11.952 [2024-11-15 11:32:49.106077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.952 [2024-11-15 11:32:49.106150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.952 [2024-11-15 11:32:49.106164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:11.952 [2024-11-15 11:32:49.106183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:29:11.952 [2024-11-15 11:32:49.106193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.952 [2024-11-15 11:32:49.106218] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:11.952 [2024-11-15 11:32:49.107163] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:11.952 [2024-11-15 11:32:49.107193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.952 [2024-11-15 11:32:49.107205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:11.952 [2024-11-15 11:32:49.107216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.982 ms 00:29:11.952 [2024-11-15 11:32:49.107225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.952 [2024-11-15 11:32:49.107665] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:11.952 [2024-11-15 11:32:49.131928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.952 [2024-11-15 11:32:49.132077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:11.952 [2024-11-15 11:32:49.132099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.303 ms 
00:29:11.952 [2024-11-15 11:32:49.132126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.952 [2024-11-15 11:32:49.146301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.952 [2024-11-15 11:32:49.146338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:11.952 [2024-11-15 11:32:49.146354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:11.952 [2024-11-15 11:32:49.146365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.952 [2024-11-15 11:32:49.146879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.952 [2024-11-15 11:32:49.146895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:11.952 [2024-11-15 11:32:49.146907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.433 ms 00:29:11.952 [2024-11-15 11:32:49.146917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.952 [2024-11-15 11:32:49.146978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.952 [2024-11-15 11:32:49.146996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:11.952 [2024-11-15 11:32:49.147007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:11.952 [2024-11-15 11:32:49.147017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.952 [2024-11-15 11:32:49.147045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.952 [2024-11-15 11:32:49.147056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:11.952 [2024-11-15 11:32:49.147067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:11.952 [2024-11-15 11:32:49.147077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.952 [2024-11-15 11:32:49.147101] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:11.952 [2024-11-15 11:32:49.151056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.952 [2024-11-15 11:32:49.151088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:11.952 [2024-11-15 11:32:49.151100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.966 ms 00:29:11.952 [2024-11-15 11:32:49.151111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.952 [2024-11-15 11:32:49.151142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.953 [2024-11-15 11:32:49.151153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:11.953 [2024-11-15 11:32:49.151164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:11.953 [2024-11-15 11:32:49.151175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.953 [2024-11-15 11:32:49.151213] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:11.953 [2024-11-15 11:32:49.151237] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:11.953 [2024-11-15 11:32:49.151271] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:11.953 [2024-11-15 11:32:49.151292] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:11.953 [2024-11-15 
11:32:49.151380] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:11.953 [2024-11-15 11:32:49.151393] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:11.953 [2024-11-15 11:32:49.151406] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:11.953 [2024-11-15 11:32:49.151419] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:11.953 [2024-11-15 11:32:49.151431] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:11.953 [2024-11-15 11:32:49.151443] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:11.953 [2024-11-15 11:32:49.151452] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:11.953 [2024-11-15 11:32:49.151462] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:11.953 [2024-11-15 11:32:49.151472] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:11.953 [2024-11-15 11:32:49.151482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.953 [2024-11-15 11:32:49.151496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:11.953 [2024-11-15 11:32:49.151506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:29:11.953 [2024-11-15 11:32:49.151516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.953 [2024-11-15 11:32:49.151605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.953 [2024-11-15 11:32:49.151617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:11.953 [2024-11-15 11:32:49.151628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:29:11.953 [2024-11-15 11:32:49.151638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.953 [2024-11-15 11:32:49.151725] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:11.953 [2024-11-15 11:32:49.151738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:11.953 [2024-11-15 11:32:49.151754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:11.953 [2024-11-15 11:32:49.151764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.953 [2024-11-15 11:32:49.151775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:11.953 [2024-11-15 11:32:49.151784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:11.953 [2024-11-15 11:32:49.151794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:11.953 [2024-11-15 11:32:49.151803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:11.953 [2024-11-15 11:32:49.151813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:11.953 [2024-11-15 11:32:49.151822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.953 [2024-11-15 11:32:49.151832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:11.953 [2024-11-15 11:32:49.151842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:11.953 [2024-11-15 11:32:49.151851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.953 [2024-11-15 
11:32:49.151861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:11.953 [2024-11-15 11:32:49.151870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:11.953 [2024-11-15 11:32:49.151879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.953 [2024-11-15 11:32:49.151888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:11.953 [2024-11-15 11:32:49.151897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:11.953 [2024-11-15 11:32:49.151906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.953 [2024-11-15 11:32:49.151915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:11.953 [2024-11-15 11:32:49.151924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:11.953 [2024-11-15 11:32:49.151933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.953 [2024-11-15 11:32:49.151942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:11.953 [2024-11-15 11:32:49.151962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:11.953 [2024-11-15 11:32:49.151971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.953 [2024-11-15 11:32:49.151980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:11.953 [2024-11-15 11:32:49.151989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:11.953 [2024-11-15 11:32:49.151999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.953 [2024-11-15 11:32:49.152008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:11.953 [2024-11-15 11:32:49.152017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:11.953 [2024-11-15 11:32:49.152026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.953 [2024-11-15 11:32:49.152035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:11.953 [2024-11-15 11:32:49.152045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:11.953 [2024-11-15 11:32:49.152055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.953 [2024-11-15 11:32:49.152064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:11.953 [2024-11-15 11:32:49.152073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:11.953 [2024-11-15 11:32:49.152083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.953 [2024-11-15 11:32:49.152092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:11.953 [2024-11-15 11:32:49.152102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:11.953 [2024-11-15 11:32:49.152111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.953 [2024-11-15 11:32:49.152120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:11.953 [2024-11-15 11:32:49.152129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:11.953 [2024-11-15 11:32:49.152140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.953 [2024-11-15 11:32:49.152150] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:11.953 [2024-11-15 11:32:49.152161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:11.953 
[2024-11-15 11:32:49.152171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:11.953 [2024-11-15 11:32:49.152181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.953 [2024-11-15 11:32:49.152190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:11.953 [2024-11-15 11:32:49.152200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:11.953 [2024-11-15 11:32:49.152210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:11.953 [2024-11-15 11:32:49.152219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:11.953 [2024-11-15 11:32:49.152228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:11.953 [2024-11-15 11:32:49.152237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:11.953 [2024-11-15 11:32:49.152248] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:11.953 [2024-11-15 11:32:49.152260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:11.953 [2024-11-15 11:32:49.152271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:11.953 [2024-11-15 11:32:49.152282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:11.953 [2024-11-15 11:32:49.152292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:11.953 [2024-11-15 11:32:49.152302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:11.953 [2024-11-15 11:32:49.152313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:11.953 [2024-11-15 11:32:49.152323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:11.953 [2024-11-15 11:32:49.152334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:11.954 [2024-11-15 11:32:49.152344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:11.954 [2024-11-15 11:32:49.152354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:11.954 [2024-11-15 11:32:49.152364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:11.954 [2024-11-15 11:32:49.152375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:11.954 [2024-11-15 11:32:49.152385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:11.954 [2024-11-15 11:32:49.152395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:11.954 [2024-11-15 11:32:49.152405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:11.954 [2024-11-15 11:32:49.152415] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:11.954 [2024-11-15 11:32:49.152427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:11.954 [2024-11-15 11:32:49.152442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:11.954 [2024-11-15 11:32:49.152453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:11.954 [2024-11-15 11:32:49.152463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:11.954 [2024-11-15 11:32:49.152474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:11.954 [2024-11-15 11:32:49.152484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.954 [2024-11-15 11:32:49.152494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:11.954 [2024-11-15 11:32:49.152504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.815 ms 00:29:11.954 [2024-11-15 11:32:49.152514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.954 [2024-11-15 11:32:49.190185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.954 [2024-11-15 11:32:49.190356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:11.954 [2024-11-15 11:32:49.190379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.667 ms 00:29:11.954 [2024-11-15 11:32:49.190390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.954 [2024-11-15 11:32:49.190436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.954 [2024-11-15 11:32:49.190447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:11.954 [2024-11-15 11:32:49.190458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:11.954 [2024-11-15 11:32:49.190469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.954 [2024-11-15 11:32:49.237210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.954 [2024-11-15 11:32:49.237248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:11.954 [2024-11-15 11:32:49.237263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.749 ms 00:29:11.954 [2024-11-15 11:32:49.237274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.954 [2024-11-15 11:32:49.237317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.954 [2024-11-15 11:32:49.237328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:11.954 [2024-11-15 11:32:49.237339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:11.954 [2024-11-15 11:32:49.237349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.954 [2024-11-15 11:32:49.237486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.954 [2024-11-15 11:32:49.237499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:29:11.954 [2024-11-15 11:32:49.237511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:29:11.954 [2024-11-15 11:32:49.237521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.954 [2024-11-15 11:32:49.237579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.954 [2024-11-15 11:32:49.237591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:11.954 [2024-11-15 11:32:49.237603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:11.954 [2024-11-15 11:32:49.237612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.954 [2024-11-15 11:32:49.257148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.954 [2024-11-15 11:32:49.257184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:11.954 [2024-11-15 11:32:49.257198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.541 ms 00:29:11.954 [2024-11-15 11:32:49.257229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.954 [2024-11-15 11:32:49.257351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.954 [2024-11-15 11:32:49.257367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:11.954 [2024-11-15 11:32:49.257378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:11.954 [2024-11-15 11:32:49.257388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.954 [2024-11-15 11:32:49.293934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.954 [2024-11-15 11:32:49.294094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:11.954 [2024-11-15 11:32:49.294116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.585 ms 00:29:11.954 [2024-11-15 11:32:49.294127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.954 [2024-11-15 11:32:49.308504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.954 [2024-11-15 11:32:49.308540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:11.954 [2024-11-15 11:32:49.308590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.637 ms 00:29:11.954 [2024-11-15 11:32:49.308601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.214 [2024-11-15 11:32:49.393300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.214 [2024-11-15 11:32:49.393396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:12.214 [2024-11-15 11:32:49.393421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 84.769 ms 00:29:12.214 [2024-11-15 11:32:49.393431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.214 [2024-11-15 11:32:49.393644] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:12.214 [2024-11-15 11:32:49.393763] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:12.214 [2024-11-15 11:32:49.393899] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:12.214 [2024-11-15 11:32:49.394016] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:12.214 [2024-11-15 11:32:49.394030] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.214 [2024-11-15 11:32:49.394041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:12.214 [2024-11-15 11:32:49.394053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.541 ms 00:29:12.214 [2024-11-15 11:32:49.394063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.214 [2024-11-15 11:32:49.394151] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:12.214 [2024-11-15 11:32:49.394177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.214 [2024-11-15 11:32:49.394192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:12.214 [2024-11-15 11:32:49.394203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:12.214 [2024-11-15 11:32:49.394213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.214 [2024-11-15 11:32:49.416484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.214 [2024-11-15 11:32:49.416644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:12.214 [2024-11-15 11:32:49.416667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.282 ms 00:29:12.214 [2024-11-15 11:32:49.416678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.214 [2024-11-15 11:32:49.431017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.214 [2024-11-15 11:32:49.431052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:12.214 [2024-11-15 11:32:49.431065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:12.214 [2024-11-15 11:32:49.431076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.214 [2024-11-15 11:32:49.431171] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:12.214 [2024-11-15 11:32:49.431366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.214 [2024-11-15 11:32:49.431377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:12.214 [2024-11-15 11:32:49.431388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.197 ms 00:29:12.214 [2024-11-15 11:32:49.431398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.784 [2024-11-15 11:32:49.971992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.784 [2024-11-15 11:32:49.972066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:12.784 [2024-11-15 11:32:49.972085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 540.369 ms 00:29:12.784 [2024-11-15 11:32:49.972096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.784 [2024-11-15 11:32:49.977818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.784 [2024-11-15 11:32:49.977974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:12.784 [2024-11-15 11:32:49.977997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.283 ms 00:29:12.784 [2024-11-15 11:32:49.978008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.784 [2024-11-15 11:32:49.978546] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:29:12.784 [2024-11-15 11:32:49.978592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.784 [2024-11-15 11:32:49.978605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:12.784 [2024-11-15 11:32:49.978618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.516 ms 00:29:12.784 [2024-11-15 11:32:49.978629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.784 [2024-11-15 11:32:49.978662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.784 [2024-11-15 11:32:49.978674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:12.784 [2024-11-15 11:32:49.978686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:12.784 [2024-11-15 11:32:49.978697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.784 [2024-11-15 11:32:49.978739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 548.457 ms, result 0 00:29:12.784 [2024-11-15 11:32:49.978783] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:12.784 [2024-11-15 11:32:49.978859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.784 [2024-11-15 11:32:49.978869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:12.784 [2024-11-15 11:32:49.978879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:29:12.784 [2024-11-15 11:32:49.978889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.510987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.511056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:13.353 [2024-11-15 11:32:50.511074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 531.800 ms 00:29:13.353 [2024-11-15 11:32:50.511087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.517038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.517198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:13.353 [2024-11-15 11:32:50.517220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.245 ms 00:29:13.353 [2024-11-15 11:32:50.517230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.517647] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:13.353 [2024-11-15 11:32:50.517676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.517687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:13.353 [2024-11-15 11:32:50.517699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.406 ms 00:29:13.353 [2024-11-15 11:32:50.517709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.517739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.517751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:13.353 [2024-11-15 11:32:50.517762] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:13.353 [2024-11-15 11:32:50.517772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.517810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 539.901 ms, result 0 00:29:13.353 [2024-11-15 11:32:50.517852] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:13.353 [2024-11-15 11:32:50.517866] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:13.353 [2024-11-15 11:32:50.517878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.517889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:13.353 [2024-11-15 11:32:50.517900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1088.492 ms 00:29:13.353 [2024-11-15 11:32:50.517910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.517939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.517951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:13.353 [2024-11-15 11:32:50.517965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:13.353 [2024-11-15 11:32:50.517975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.529250] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:13.353 [2024-11-15 11:32:50.529404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.529418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:13.353 [2024-11-15 11:32:50.529431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.431 ms 00:29:13.353 [2024-11-15 11:32:50.529441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.530031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.530058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:13.353 [2024-11-15 11:32:50.530074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.516 ms 00:29:13.353 [2024-11-15 11:32:50.530085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.532093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.532236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:13.353 [2024-11-15 11:32:50.532256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.992 ms 00:29:13.353 [2024-11-15 11:32:50.532268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.532322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.532334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:13.353 [2024-11-15 11:32:50.532345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:13.353 [2024-11-15 11:32:50.532359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.532456] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.532469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:13.353 [2024-11-15 11:32:50.532480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:13.353 [2024-11-15 11:32:50.532490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.532511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.532522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:13.353 [2024-11-15 11:32:50.532533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:13.353 [2024-11-15 11:32:50.532543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.532589] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:13.353 [2024-11-15 11:32:50.532602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.532612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:13.353 [2024-11-15 11:32:50.532622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:13.353 [2024-11-15 11:32:50.532632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.532682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.353 [2024-11-15 11:32:50.532693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:13.353 [2024-11-15 11:32:50.532703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:29:13.353 [2024-11-15 11:32:50.532713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.353 [2024-11-15 11:32:50.533668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1429.681 ms, result 0 00:29:13.353 [2024-11-15 11:32:50.545986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.353 [2024-11-15 11:32:50.561959] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:13.353 [2024-11-15 11:32:50.571410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:13.353 Validate MD5 checksum, iteration 1 00:29:13.353 11:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:13.353 11:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:29:13.353 11:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:13.353 11:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:13.353 11:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:13.353 11:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:13.353 11:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:13.354 11:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:13.354 11:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:13.354 11:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:13.354 11:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:13.354 11:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:13.354 11:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:13.354 11:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:13.354 11:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:13.354 [2024-11-15 11:32:50.709164] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:29:13.354 [2024-11-15 11:32:50.709428] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81453 ] 00:29:13.612 [2024-11-15 11:32:50.877516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.871 [2024-11-15 11:32:51.031399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.772  [2024-11-15T11:32:53.430Z] Copying: 708/1024 [MB] (708 MBps) [2024-11-15T11:32:55.333Z] Copying: 1024/1024 [MB] (average 684 MBps) 00:29:17.932 00:29:17.932 11:32:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:17.932 11:32:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:19.837 11:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:19.837 Validate MD5 checksum, iteration 2 00:29:19.837 11:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c0261b06b4143385cd08b1efb2a2418c 00:29:19.837 11:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c0261b06b4143385cd08b1efb2a2418c != \c\0\2\6\1\b\0\6\b\4\1\4\3\3\8\5\c\d\0\8\b\1\e\f\b\2\a\2\4\1\8\c ]] 00:29:19.837 11:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:19.837 11:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:19.837 11:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:19.837 11:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:19.837 11:32:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:19.837 11:32:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:19.837 11:32:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:19.837 11:32:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:19.837 11:32:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:19.837 [2024-11-15 11:32:56.824146] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization... 00:29:19.837 [2024-11-15 11:32:56.824433] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81521 ] 00:29:19.837 [2024-11-15 11:32:57.004829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.837 [2024-11-15 11:32:57.121502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.786  [2024-11-15T11:32:59.446Z] Copying: 698/1024 [MB] (698 MBps) [2024-11-15T11:33:02.735Z] Copying: 1024/1024 [MB] (average 698 MBps) 00:29:25.334 00:29:25.334 11:33:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:25.334 11:33:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=12ad3090ac7b2300e18cb8f0a8f67f8c 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 12ad3090ac7b2300e18cb8f0a8f67f8c != \1\2\a\d\3\0\9\0\a\c\7\b\2\3\0\0\e\1\8\c\b\8\f\0\a\8\f\6\7\f\8\c ]] 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81417 ]] 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81417 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81417 ']' 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81417 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81417 00:29:27.240 killing process with pid 81417 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81417' 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 81417 00:29:27.240 11:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81417 00:29:28.177 [2024-11-15 11:33:05.455899] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:28.177 [2024-11-15 11:33:05.476013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.177 [2024-11-15 11:33:05.476052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:28.177 [2024-11-15 11:33:05.476068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:28.177 [2024-11-15 11:33:05.476095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.177 [2024-11-15 11:33:05.476117] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:28.177 [2024-11-15 11:33:05.480401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.177 [2024-11-15 11:33:05.480430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:28.177 [2024-11-15 11:33:05.480447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.275 ms 00:29:28.177 [2024-11-15 11:33:05.480457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.177 [2024-11-15 11:33:05.480671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.177 [2024-11-15 11:33:05.480684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:28.177 [2024-11-15 11:33:05.480695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.189 ms 00:29:28.177 [2024-11-15 11:33:05.480705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.177 [2024-11-15 11:33:05.481894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.177 [2024-11-15 11:33:05.481921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:28.177 [2024-11-15 11:33:05.481933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.173 ms 00:29:28.177 [2024-11-15 11:33:05.481943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.177 [2024-11-15 11:33:05.482899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.177 [2024-11-15 11:33:05.482916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:28.177 [2024-11-15 11:33:05.482927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.917 ms 00:29:28.177 [2024-11-15 11:33:05.482937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.177 [2024-11-15 11:33:05.497399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.177 [2024-11-15 11:33:05.497432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:28.177 [2024-11-15 11:33:05.497446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.443 ms 00:29:28.177 [2024-11-15 11:33:05.497463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.177 [2024-11-15 11:33:05.505132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.177 [2024-11-15 11:33:05.505164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:29:28.177 [2024-11-15 11:33:05.505177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.645 ms 00:29:28.177 [2024-11-15 11:33:05.505188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.177 [2024-11-15 11:33:05.505370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.177 [2024-11-15 11:33:05.505385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:28.177 [2024-11-15 11:33:05.505396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:29:28.177 [2024-11-15 11:33:05.505407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.177 [2024-11-15 11:33:05.520566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.177 [2024-11-15 11:33:05.520596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:28.177 [2024-11-15 11:33:05.520619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.158 ms 00:29:28.177 [2024-11-15 11:33:05.520629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.177 [2024-11-15 11:33:05.535893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.177 [2024-11-15 11:33:05.535922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:28.177 [2024-11-15 11:33:05.535934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.239 ms 00:29:28.177 [2024-11-15 11:33:05.535944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.177 [2024-11-15 11:33:05.551055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.177 [2024-11-15 11:33:05.551084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:28.177 [2024-11-15 11:33:05.551097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.101 ms 00:29:28.177 [2024-11-15 11:33:05.551107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.177 [2024-11-15 11:33:05.565349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.177 [2024-11-15 11:33:05.565376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:28.177 [2024-11-15 11:33:05.565387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.197 ms 00:29:28.177 [2024-11-15 11:33:05.565395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.177 [2024-11-15 11:33:05.565430] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:28.177 [2024-11-15 11:33:05.565445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:28.177 [2024-11-15 11:33:05.565456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:28.177 [2024-11-15 11:33:05.565467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:28.177 [2024-11-15 11:33:05.565476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:28.177 [2024-11-15 11:33:05.565487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:28.177 [2024-11-15 11:33:05.565497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:28.177 [2024-11-15 11:33:05.565507] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:29:28.177 [2024-11-15 11:33:05.565517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:29:28.177 [2024-11-15 11:33:05.565527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:29:28.177 [2024-11-15 11:33:05.565536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:29:28.177 [2024-11-15 11:33:05.565546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:29:28.177 [2024-11-15 11:33:05.565564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:29:28.177 [2024-11-15 11:33:05.565574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:29:28.177 [2024-11-15 11:33:05.565600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:29:28.177 [2024-11-15 11:33:05.565611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:29:28.177 [2024-11-15 11:33:05.565621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:29:28.177 [2024-11-15 11:33:05.565631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:29:28.177 [2024-11-15 11:33:05.565641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:29:28.177 [2024-11-15 11:33:05.565653] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:29:28.177 [2024-11-15 11:33:05.565664] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 47b180a1-4151-470a-a496-2e1e525ea212
00:29:28.177 [2024-11-15 11:33:05.565674] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:29:28.177 [2024-11-15 11:33:05.565684] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:29:28.177 [2024-11-15 11:33:05.565693] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:29:28.178 [2024-11-15 11:33:05.565703] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:29:28.178 [2024-11-15 11:33:05.565712] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:29:28.178 [2024-11-15 11:33:05.565722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:29:28.178 [2024-11-15 11:33:05.565732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:29:28.178 [2024-11-15 11:33:05.565741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:29:28.178 [2024-11-15 11:33:05.565750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:29:28.178 [2024-11-15 11:33:05.565761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:28.178 [2024-11-15 11:33:05.565777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:29:28.178 [2024-11-15 11:33:05.565789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms
00:29:28.178 [2024-11-15 11:33:05.565799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.437 [2024-11-15 11:33:05.585418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:28.437 [2024-11-15 11:33:05.585448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:29:28.437 [2024-11-15 11:33:05.585460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.633 ms
00:29:28.437 [2024-11-15 11:33:05.585469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.437 [2024-11-15 11:33:05.586016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:28.437 [2024-11-15 11:33:05.586031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:29:28.437 [2024-11-15 11:33:05.586041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.516 ms
00:29:28.437 [2024-11-15 11:33:05.586052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.437 [2024-11-15 11:33:05.648805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:28.437 [2024-11-15 11:33:05.648835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:29:28.437 [2024-11-15 11:33:05.648847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:28.437 [2024-11-15 11:33:05.648857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.437 [2024-11-15 11:33:05.648892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:28.437 [2024-11-15 11:33:05.648902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:29:28.437 [2024-11-15 11:33:05.648912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:28.437 [2024-11-15 11:33:05.648922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.437 [2024-11-15 11:33:05.648995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:28.437 [2024-11-15 11:33:05.649007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:29:28.437 [2024-11-15 11:33:05.649018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:28.437 [2024-11-15 11:33:05.649027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.437 [2024-11-15 11:33:05.649044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:28.437 [2024-11-15 11:33:05.649059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:29:28.437 [2024-11-15 11:33:05.649068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:28.437 [2024-11-15 11:33:05.649077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.437 [2024-11-15 11:33:05.766901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:28.437 [2024-11-15 11:33:05.766945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:29:28.437 [2024-11-15 11:33:05.766975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:28.437 [2024-11-15 11:33:05.766985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.697 [2024-11-15 11:33:05.864019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:28.697 [2024-11-15 11:33:05.864060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:29:28.697 [2024-11-15 11:33:05.864074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:28.697 [2024-11-15 11:33:05.864084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.697 [2024-11-15 11:33:05.864180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:28.697 [2024-11-15 11:33:05.864192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:29:28.697 [2024-11-15 11:33:05.864203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:28.697 [2024-11-15 11:33:05.864212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.697 [2024-11-15 11:33:05.864255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:28.697 [2024-11-15 11:33:05.864266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:29:28.697 [2024-11-15 11:33:05.864281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:28.697 [2024-11-15 11:33:05.864299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.697 [2024-11-15 11:33:05.864407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:28.697 [2024-11-15 11:33:05.864436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:29:28.697 [2024-11-15 11:33:05.864446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:28.697 [2024-11-15 11:33:05.864455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.697 [2024-11-15 11:33:05.864508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:28.697 [2024-11-15 11:33:05.864520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:29:28.697 [2024-11-15 11:33:05.864531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:28.697 [2024-11-15 11:33:05.864545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.697 [2024-11-15 11:33:05.864583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:28.697 [2024-11-15 11:33:05.864615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:29:28.697 [2024-11-15 11:33:05.864626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:28.697 [2024-11-15 11:33:05.864636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.697 [2024-11-15 11:33:05.864681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:28.697 [2024-11-15 11:33:05.864692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:29:28.697 [2024-11-15 11:33:05.864707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:28.697 [2024-11-15 11:33:05.864716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:28.697 [2024-11-15 11:33:05.864839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 389.420 ms, result 0
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:29:30.076 Remove shared memory files 11:33:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81201
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:29:30.076
00:29:30.076 real 1m34.491s
00:29:30.076 user 2m7.785s
00:29:30.076 sys 0m24.218s
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:30.076 11:33:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:30.076 ************************************
00:29:30.076 END TEST ftl_upgrade_shutdown
00:29:30.076 ************************************
00:29:30.076 11:33:07 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:29:30.076 11:33:07 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:29:30.076 11:33:07 ftl -- ftl/ftl.sh@14 -- # killprocess 74022
00:29:30.076 11:33:07 ftl -- common/autotest_common.sh@952 -- # '[' -z 74022 ']'
00:29:30.076 11:33:07 ftl -- common/autotest_common.sh@956 -- # kill -0 74022
00:29:30.076 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74022) - No such process
00:29:30.076 Process with pid 74022 is not found 11:33:07 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 74022 is not found'
00:29:30.076 11:33:07 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:29:30.076 11:33:07 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81667
00:29:30.076 11:33:07 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:30.076 11:33:07 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81667
00:29:30.076 11:33:07 ftl -- common/autotest_common.sh@833 -- # '[' -z 81667 ']'
00:29:30.076 11:33:07 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:30.076 11:33:07 ftl -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:30.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 11:33:07 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:30.076 11:33:07 ftl -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:30.076 11:33:07 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:30.076 [2024-11-15 11:33:07.299576] Starting SPDK v25.01-pre git sha1 57db986b9 / DPDK 24.03.0 initialization...
00:29:30.076 [2024-11-15 11:33:07.299696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81667 ]
00:29:30.336 [2024-11-15 11:33:07.482244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:30.336 [2024-11-15 11:33:07.595267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:31.272 11:33:08 ftl -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:31.272 11:33:08 ftl -- common/autotest_common.sh@866 -- # return 0
00:29:31.272 11:33:08 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:29:31.531 nvme0n1
00:29:31.531 11:33:08 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:29:31.531 11:33:08 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:29:31.531 11:33:08 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:29:31.789 11:33:08 ftl -- ftl/common.sh@28 -- # stores=b3588499-660c-401b-8529-c165c6f338ca
00:29:31.789 11:33:08 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:29:31.789 11:33:08 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3588499-660c-401b-8529-c165c6f338ca
00:29:31.789 11:33:09 ftl -- ftl/ftl.sh@23 -- # killprocess 81667
00:29:31.789 11:33:09 ftl -- common/autotest_common.sh@952 -- # '[' -z 81667 ']'
00:29:31.789 11:33:09 ftl -- common/autotest_common.sh@956 -- # kill -0 81667
00:29:31.789 11:33:09 ftl -- common/autotest_common.sh@957 -- # uname
00:29:31.789 11:33:09 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:31.789 11:33:09 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81667
00:29:31.789 killing process with pid 81667 11:33:09 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:29:31.789 11:33:09 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:29:31.789 11:33:09 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81667'
00:29:31.789 11:33:09 ftl -- common/autotest_common.sh@971 -- # kill 81667
00:29:31.789 11:33:09 ftl -- common/autotest_common.sh@976 -- # wait 81667
00:29:34.326 11:33:11 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:29:34.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:34.585 Waiting for block devices as requested
00:29:34.585 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:29:34.844 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:29:34.844 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:29:35.103 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:29:40.376 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:29:40.376 11:33:17 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:29:40.376 11:33:17 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:29:40.376 Remove shared memory files 11:33:17 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:29:40.376 11:33:17 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:29:40.376 11:33:17 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:29:40.376 11:33:17 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:40.376 11:33:17 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:29:40.376
00:29:40.376 real 11m15.056s
00:29:40.376 user 13m46.262s
00:29:40.376 sys 1m33.933s
00:29:40.376 11:33:17 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:40.376 11:33:17 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:40.376 ************************************
00:29:40.376 END TEST ftl
00:29:40.376 ************************************
00:29:40.377 11:33:17 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:29:40.377 11:33:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:29:40.377 11:33:17 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:29:40.377 11:33:17 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:29:40.377 11:33:17 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:29:40.377 11:33:17 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:29:40.377 11:33:17 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:29:40.377 11:33:17 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:29:40.377 11:33:17 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:29:40.377 11:33:17 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:29:40.377 11:33:17 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:40.377 11:33:17 -- common/autotest_common.sh@10 -- # set +x
00:29:40.377 11:33:17 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:29:40.377 11:33:17 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:29:40.377 11:33:17 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:29:40.377 11:33:17 -- common/autotest_common.sh@10 -- # set +x
00:29:42.312 INFO: APP EXITING
00:29:42.312 INFO: killing all VMs
00:29:42.312 INFO: killing vhost app
00:29:42.312 INFO: EXIT DONE
00:29:42.571 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:43.139 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:29:43.139 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:29:43.139 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:29:43.139 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:29:43.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:44.275 Cleaning
00:29:44.275 Removing: /var/run/dpdk/spdk0/config
00:29:44.275 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:29:44.275 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:29:44.275 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:29:44.275 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:29:44.275 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:29:44.275 Removing: /var/run/dpdk/spdk0/hugepage_info
00:29:44.275 Removing: /var/run/dpdk/spdk0
00:29:44.275 Removing: /var/run/dpdk/spdk_pid57529
00:29:44.275 Removing: /var/run/dpdk/spdk_pid57775
00:29:44.275 Removing: /var/run/dpdk/spdk_pid58015
00:29:44.275 Removing: /var/run/dpdk/spdk_pid58119
00:29:44.275 Removing: /var/run/dpdk/spdk_pid58175
00:29:44.275 Removing: /var/run/dpdk/spdk_pid58314
00:29:44.275 Removing: /var/run/dpdk/spdk_pid58338
00:29:44.275 Removing: /var/run/dpdk/spdk_pid58553
00:29:44.275 Removing: /var/run/dpdk/spdk_pid58659
00:29:44.275 Removing: /var/run/dpdk/spdk_pid58777
00:29:44.275 Removing: /var/run/dpdk/spdk_pid58899
00:29:44.275 Removing: /var/run/dpdk/spdk_pid59013
00:29:44.275 Removing: /var/run/dpdk/spdk_pid59058
00:29:44.275 Removing: /var/run/dpdk/spdk_pid59094
00:29:44.275 Removing: /var/run/dpdk/spdk_pid59165
00:29:44.275 Removing: /var/run/dpdk/spdk_pid59271
00:29:44.275 Removing: /var/run/dpdk/spdk_pid59731
00:29:44.275 Removing: /var/run/dpdk/spdk_pid59806
00:29:44.275 Removing: /var/run/dpdk/spdk_pid59892
00:29:44.275 Removing: /var/run/dpdk/spdk_pid59914
00:29:44.275 Removing: /var/run/dpdk/spdk_pid60077
00:29:44.275 Removing: /var/run/dpdk/spdk_pid60104
00:29:44.275 Removing: /var/run/dpdk/spdk_pid60263
00:29:44.275 Removing: /var/run/dpdk/spdk_pid60285
00:29:44.275 Removing: /var/run/dpdk/spdk_pid60354
00:29:44.275 Removing: /var/run/dpdk/spdk_pid60378
00:29:44.275 Removing: /var/run/dpdk/spdk_pid60446
00:29:44.275 Removing: /var/run/dpdk/spdk_pid60471
00:29:44.275 Removing: /var/run/dpdk/spdk_pid60666
00:29:44.275 Removing: /var/run/dpdk/spdk_pid60708
00:29:44.275 Removing: /var/run/dpdk/spdk_pid60797
00:29:44.275 Removing: /var/run/dpdk/spdk_pid60991
00:29:44.275 Removing: /var/run/dpdk/spdk_pid61098
00:29:44.275 Removing: /var/run/dpdk/spdk_pid61146
00:29:44.275 Removing: /var/run/dpdk/spdk_pid61601
00:29:44.275 Removing: /var/run/dpdk/spdk_pid61710
00:29:44.275 Removing: /var/run/dpdk/spdk_pid61825
00:29:44.275 Removing: /var/run/dpdk/spdk_pid61883
00:29:44.275 Removing: /var/run/dpdk/spdk_pid61909
00:29:44.275 Removing: /var/run/dpdk/spdk_pid61993
00:29:44.275 Removing: /var/run/dpdk/spdk_pid62650
00:29:44.275 Removing: /var/run/dpdk/spdk_pid62698
00:29:44.275 Removing: /var/run/dpdk/spdk_pid63190
00:29:44.275 Removing: /var/run/dpdk/spdk_pid63299
00:29:44.275 Removing: /var/run/dpdk/spdk_pid63415
00:29:44.275 Removing: /var/run/dpdk/spdk_pid63473
00:29:44.275 Removing: /var/run/dpdk/spdk_pid63504
00:29:44.275 Removing: /var/run/dpdk/spdk_pid63530
00:29:44.275 Removing: /var/run/dpdk/spdk_pid65430
00:29:44.275 Removing: /var/run/dpdk/spdk_pid65578
00:29:44.275 Removing: /var/run/dpdk/spdk_pid65588
00:29:44.275 Removing: /var/run/dpdk/spdk_pid65605
00:29:44.275 Removing: /var/run/dpdk/spdk_pid65644
00:29:44.275 Removing: /var/run/dpdk/spdk_pid65648
00:29:44.275 Removing: /var/run/dpdk/spdk_pid65660
00:29:44.535 Removing: /var/run/dpdk/spdk_pid65705
00:29:44.535 Removing: /var/run/dpdk/spdk_pid65709
00:29:44.535 Removing: /var/run/dpdk/spdk_pid65721
00:29:44.535 Removing: /var/run/dpdk/spdk_pid65771
00:29:44.535 Removing: /var/run/dpdk/spdk_pid65775
00:29:44.535 Removing: /var/run/dpdk/spdk_pid65787
00:29:44.535 Removing: /var/run/dpdk/spdk_pid67190
00:29:44.535 Removing: /var/run/dpdk/spdk_pid67304
00:29:44.535 Removing: /var/run/dpdk/spdk_pid68736
00:29:44.535 Removing: /var/run/dpdk/spdk_pid70096
00:29:44.535 Removing: /var/run/dpdk/spdk_pid70205
00:29:44.535 Removing: /var/run/dpdk/spdk_pid70315
00:29:44.535 Removing: /var/run/dpdk/spdk_pid70424
00:29:44.535 Removing: /var/run/dpdk/spdk_pid70550
00:29:44.535 Removing: /var/run/dpdk/spdk_pid70631
00:29:44.535 Removing: /var/run/dpdk/spdk_pid70784
00:29:44.535 Removing: /var/run/dpdk/spdk_pid71160
00:29:44.535 Removing: /var/run/dpdk/spdk_pid71202
00:29:44.535 Removing: /var/run/dpdk/spdk_pid71656
00:29:44.535 Removing: /var/run/dpdk/spdk_pid71842
00:29:44.535 Removing: /var/run/dpdk/spdk_pid71947
00:29:44.535 Removing: /var/run/dpdk/spdk_pid72058
00:29:44.535 Removing: /var/run/dpdk/spdk_pid72117
00:29:44.535 Removing: /var/run/dpdk/spdk_pid72147
00:29:44.535 Removing: /var/run/dpdk/spdk_pid72455
00:29:44.535 Removing: /var/run/dpdk/spdk_pid72530
00:29:44.535 Removing: /var/run/dpdk/spdk_pid72617
00:29:44.535 Removing: /var/run/dpdk/spdk_pid73059
00:29:44.535 Removing: /var/run/dpdk/spdk_pid73211
00:29:44.535 Removing: /var/run/dpdk/spdk_pid74022
00:29:44.535 Removing: /var/run/dpdk/spdk_pid74165
00:29:44.535 Removing: /var/run/dpdk/spdk_pid74370
00:29:44.535 Removing: /var/run/dpdk/spdk_pid74487
00:29:44.535 Removing: /var/run/dpdk/spdk_pid74812
00:29:44.535 Removing: /var/run/dpdk/spdk_pid75089
00:29:44.535 Removing: /var/run/dpdk/spdk_pid75441
00:29:44.535 Removing: /var/run/dpdk/spdk_pid75656
00:29:44.535 Removing: /var/run/dpdk/spdk_pid75786
00:29:44.535 Removing: /var/run/dpdk/spdk_pid75850
00:29:44.535 Removing: /var/run/dpdk/spdk_pid75988
00:29:44.535 Removing: /var/run/dpdk/spdk_pid76025
00:29:44.535 Removing: /var/run/dpdk/spdk_pid76091
00:29:44.535 Removing: /var/run/dpdk/spdk_pid76289
00:29:44.535 Removing: /var/run/dpdk/spdk_pid76531
00:29:44.535 Removing: /var/run/dpdk/spdk_pid76912
00:29:44.535 Removing: /var/run/dpdk/spdk_pid77335
00:29:44.535 Removing: /var/run/dpdk/spdk_pid77748
00:29:44.535 Removing: /var/run/dpdk/spdk_pid78230
00:29:44.535 Removing: /var/run/dpdk/spdk_pid78383
00:29:44.535 Removing: /var/run/dpdk/spdk_pid78477
00:29:44.535 Removing: /var/run/dpdk/spdk_pid79128
00:29:44.535 Removing: /var/run/dpdk/spdk_pid79207
00:29:44.535 Removing: /var/run/dpdk/spdk_pid79699
00:29:44.535 Removing: /var/run/dpdk/spdk_pid80082
00:29:44.535 Removing: /var/run/dpdk/spdk_pid80591
00:29:44.535 Removing: /var/run/dpdk/spdk_pid80735
00:29:44.535 Removing: /var/run/dpdk/spdk_pid80797
00:29:44.535 Removing: /var/run/dpdk/spdk_pid80862
00:29:44.535 Removing: /var/run/dpdk/spdk_pid80921
00:29:44.794 Removing: /var/run/dpdk/spdk_pid80992
00:29:44.794 Removing: /var/run/dpdk/spdk_pid81201
00:29:44.794 Removing: /var/run/dpdk/spdk_pid81290
00:29:44.794 Removing: /var/run/dpdk/spdk_pid81357
00:29:44.794 Removing: /var/run/dpdk/spdk_pid81417
00:29:44.794 Removing: /var/run/dpdk/spdk_pid81453
00:29:44.794 Removing: /var/run/dpdk/spdk_pid81521
00:29:44.794 Removing: /var/run/dpdk/spdk_pid81667
00:29:44.794 Clean
00:29:44.794 11:33:22 -- common/autotest_common.sh@1451 -- # return 0
00:29:44.794 11:33:22 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:29:44.794 11:33:22 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:44.794 11:33:22 -- common/autotest_common.sh@10 -- # set +x
00:29:44.794 11:33:22 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:29:44.794 11:33:22 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:44.794 11:33:22 -- common/autotest_common.sh@10 -- # set +x
00:29:44.794 11:33:22 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:44.794 11:33:22 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:29:44.794 11:33:22 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:29:44.794 11:33:22 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:29:44.794 11:33:22 -- spdk/autotest.sh@394 -- # hostname
00:29:45.053 11:33:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:29:45.053 geninfo: WARNING: invalid characters removed from testname!
00:30:11.605 11:33:48 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:14.141 11:33:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:16.046 11:33:53 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:18.588 11:33:55 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:21.124 11:33:58 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:23.030 11:34:00 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:25.566 11:34:02 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:25.566 11:34:02 -- spdk/autorun.sh@1 -- $ timing_finish
00:30:25.566 11:34:02 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:30:25.566 11:34:02 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:25.566 11:34:02 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:30:25.566 11:34:02 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:30:25.566 + [[ -n 5244 ]]
00:30:25.566 + sudo kill 5244
00:30:25.576 [Pipeline] }
00:30:25.592 [Pipeline] // timeout
00:30:25.597 [Pipeline] }
00:30:25.612 [Pipeline] // stage
00:30:25.617 [Pipeline] }
00:30:25.631 [Pipeline] // catchError
00:30:25.642 [Pipeline] stage
00:30:25.645 [Pipeline] { (Stop VM)
00:30:25.658 [Pipeline] sh
00:30:25.941 + vagrant halt
00:30:29.311 ==> default: Halting domain...
00:30:35.892 [Pipeline] sh
00:30:36.172 + vagrant destroy -f
00:30:38.782 ==> default: Removing domain...
00:30:39.368 [Pipeline] sh
00:30:39.650 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:30:39.660 [Pipeline] }
00:30:39.676 [Pipeline] // stage
00:30:39.681 [Pipeline] }
00:30:39.695 [Pipeline] // dir
00:30:39.700 [Pipeline] }
00:30:39.714 [Pipeline] // wrap
00:30:39.720 [Pipeline] }
00:30:39.733 [Pipeline] // catchError
00:30:39.742 [Pipeline] stage
00:30:39.744 [Pipeline] { (Epilogue)
00:30:39.759 [Pipeline] sh
00:30:40.044 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:45.334 [Pipeline] catchError
00:30:45.337 [Pipeline] {
00:30:45.351 [Pipeline] sh
00:30:45.635 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:45.635 Artifacts sizes are good
00:30:45.644 [Pipeline] }
00:30:45.660 [Pipeline] // catchError
00:30:45.673 [Pipeline] archiveArtifacts
00:30:45.682 Archiving artifacts
00:30:45.798 [Pipeline] cleanWs
00:30:45.812 [WS-CLEANUP] Deleting project workspace...
00:30:45.812 [WS-CLEANUP] Deferred wipeout is used...
00:30:45.819 [WS-CLEANUP] done
00:30:45.821 [Pipeline] }
00:30:45.838 [Pipeline] // stage
00:30:45.844 [Pipeline] }
00:30:45.859 [Pipeline] // node
00:30:45.868 [Pipeline] End of Pipeline
00:30:45.908 Finished: SUCCESS